Oct 01 11:17:21 localhost kernel: Linux version 5.14.0-617.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025
Oct 01 11:17:21 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 01 11:17:21 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 11:17:21 localhost kernel: BIOS-provided physical RAM map:
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 01 11:17:21 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 01 11:17:21 localhost kernel: NX (Execute Disable) protection: active
Oct 01 11:17:21 localhost kernel: APIC: Static calls initialized
Oct 01 11:17:21 localhost kernel: SMBIOS 2.8 present.
Oct 01 11:17:21 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 01 11:17:21 localhost kernel: Hypervisor detected: KVM
Oct 01 11:17:21 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 01 11:17:21 localhost kernel: kvm-clock: using sched offset of 4113748075 cycles
Oct 01 11:17:21 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 01 11:17:21 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 01 11:17:21 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 01 11:17:21 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 01 11:17:21 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 01 11:17:21 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 01 11:17:21 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 01 11:17:21 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 01 11:17:21 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 01 11:17:21 localhost kernel: Using GB pages for direct mapping
Oct 01 11:17:21 localhost kernel: RAMDISK: [mem 0x2d7d0000-0x32bdffff]
Oct 01 11:17:21 localhost kernel: ACPI: Early table checksum verification disabled
Oct 01 11:17:21 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 01 11:17:21 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 11:17:21 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 11:17:21 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 11:17:21 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 01 11:17:21 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 11:17:21 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 01 11:17:21 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 01 11:17:21 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 01 11:17:21 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 01 11:17:21 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 01 11:17:21 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 01 11:17:21 localhost kernel: No NUMA configuration found
Oct 01 11:17:21 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 01 11:17:21 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 01 11:17:21 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 01 11:17:21 localhost kernel: Zone ranges:
Oct 01 11:17:21 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 01 11:17:21 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 01 11:17:21 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 01 11:17:21 localhost kernel:   Device   empty
Oct 01 11:17:21 localhost kernel: Movable zone start for each node
Oct 01 11:17:21 localhost kernel: Early memory node ranges
Oct 01 11:17:21 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 01 11:17:21 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 01 11:17:21 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 01 11:17:21 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 01 11:17:21 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 01 11:17:21 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 01 11:17:21 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 01 11:17:21 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 01 11:17:21 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 01 11:17:21 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 01 11:17:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 01 11:17:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 01 11:17:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 01 11:17:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 01 11:17:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 01 11:17:21 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 01 11:17:21 localhost kernel: TSC deadline timer available
Oct 01 11:17:21 localhost kernel: CPU topo: Max. logical packages:   8
Oct 01 11:17:21 localhost kernel: CPU topo: Max. logical dies:       8
Oct 01 11:17:21 localhost kernel: CPU topo: Max. dies per package:   1
Oct 01 11:17:21 localhost kernel: CPU topo: Max. threads per core:   1
Oct 01 11:17:21 localhost kernel: CPU topo: Num. cores per package:     1
Oct 01 11:17:21 localhost kernel: CPU topo: Num. threads per package:   1
Oct 01 11:17:21 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 01 11:17:21 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 01 11:17:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 01 11:17:21 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 01 11:17:21 localhost kernel: Booting paravirtualized kernel on KVM
Oct 01 11:17:21 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 01 11:17:21 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 01 11:17:21 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 01 11:17:21 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 01 11:17:21 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 01 11:17:21 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 01 11:17:21 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 11:17:21 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64", will be passed to user space.
Oct 01 11:17:21 localhost kernel: random: crng init done
Oct 01 11:17:21 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 01 11:17:21 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 01 11:17:21 localhost kernel: Fallback order for Node 0: 0 
Oct 01 11:17:21 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 01 11:17:21 localhost kernel: Policy zone: Normal
Oct 01 11:17:21 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 01 11:17:21 localhost kernel: software IO TLB: area num 8.
Oct 01 11:17:21 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 01 11:17:21 localhost kernel: ftrace: allocating 49329 entries in 193 pages
Oct 01 11:17:21 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 01 11:17:21 localhost kernel: Dynamic Preempt: voluntary
Oct 01 11:17:21 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 01 11:17:21 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 01 11:17:21 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 01 11:17:21 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 01 11:17:21 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 01 11:17:21 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 01 11:17:21 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 01 11:17:21 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 01 11:17:21 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 11:17:21 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 11:17:21 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 01 11:17:21 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 01 11:17:21 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 01 11:17:21 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 01 11:17:21 localhost kernel: Console: colour VGA+ 80x25
Oct 01 11:17:21 localhost kernel: printk: console [ttyS0] enabled
Oct 01 11:17:21 localhost kernel: ACPI: Core revision 20230331
Oct 01 11:17:21 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 01 11:17:21 localhost kernel: x2apic enabled
Oct 01 11:17:21 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 01 11:17:21 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 01 11:17:21 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 01 11:17:21 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 01 11:17:21 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 01 11:17:21 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 01 11:17:21 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 01 11:17:21 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 01 11:17:21 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 01 11:17:21 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 01 11:17:21 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 01 11:17:21 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 01 11:17:21 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 01 11:17:21 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 01 11:17:21 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 01 11:17:21 localhost kernel: x86/bugs: return thunk changed
Oct 01 11:17:21 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 01 11:17:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 01 11:17:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 01 11:17:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 01 11:17:21 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 01 11:17:21 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 01 11:17:21 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 01 11:17:21 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 01 11:17:21 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 01 11:17:21 localhost kernel: landlock: Up and running.
Oct 01 11:17:21 localhost kernel: Yama: becoming mindful.
Oct 01 11:17:21 localhost kernel: SELinux:  Initializing.
Oct 01 11:17:21 localhost kernel: LSM support for eBPF active
Oct 01 11:17:21 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 01 11:17:21 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 01 11:17:21 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 01 11:17:21 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 01 11:17:21 localhost kernel: ... version:                0
Oct 01 11:17:21 localhost kernel: ... bit width:              48
Oct 01 11:17:21 localhost kernel: ... generic registers:      6
Oct 01 11:17:21 localhost kernel: ... value mask:             0000ffffffffffff
Oct 01 11:17:21 localhost kernel: ... max period:             00007fffffffffff
Oct 01 11:17:21 localhost kernel: ... fixed-purpose events:   0
Oct 01 11:17:21 localhost kernel: ... event mask:             000000000000003f
Oct 01 11:17:21 localhost kernel: signal: max sigframe size: 1776
Oct 01 11:17:21 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 01 11:17:21 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 01 11:17:21 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 01 11:17:21 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 01 11:17:21 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 01 11:17:21 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 01 11:17:21 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 01 11:17:21 localhost kernel: node 0 deferred pages initialised in 23ms
Oct 01 11:17:21 localhost kernel: Memory: 7765416K/8388068K available (16384K kernel code, 5784K rwdata, 13988K rodata, 4072K init, 7304K bss, 616492K reserved, 0K cma-reserved)
Oct 01 11:17:21 localhost kernel: devtmpfs: initialized
Oct 01 11:17:21 localhost kernel: x86/mm: Memory block size: 128MB
Oct 01 11:17:21 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 01 11:17:21 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 01 11:17:21 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 01 11:17:21 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 01 11:17:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 01 11:17:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 01 11:17:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 01 11:17:21 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 01 11:17:21 localhost kernel: audit: type=2000 audit(1759317439.555:1): state=initialized audit_enabled=0 res=1
Oct 01 11:17:21 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 01 11:17:21 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 01 11:17:21 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 01 11:17:21 localhost kernel: cpuidle: using governor menu
Oct 01 11:17:21 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 01 11:17:21 localhost kernel: PCI: Using configuration type 1 for base access
Oct 01 11:17:21 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 01 11:17:21 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 01 11:17:21 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 01 11:17:21 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 01 11:17:21 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 01 11:17:21 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 01 11:17:21 localhost kernel: Demotion targets for Node 0: null
Oct 01 11:17:21 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 01 11:17:21 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 01 11:17:21 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 01 11:17:21 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 01 11:17:21 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 01 11:17:21 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 01 11:17:21 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 01 11:17:21 localhost kernel: ACPI: Interpreter enabled
Oct 01 11:17:21 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 01 11:17:21 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 01 11:17:21 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 01 11:17:21 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 01 11:17:21 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 01 11:17:21 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 01 11:17:21 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [3] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [4] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [5] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [6] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [7] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [8] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [9] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [10] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [11] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [12] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [13] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [14] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [15] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [16] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [17] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [18] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [19] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [20] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [21] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [22] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [23] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [24] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [25] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [26] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [27] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [28] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [29] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [30] registered
Oct 01 11:17:21 localhost kernel: acpiphp: Slot [31] registered
Oct 01 11:17:21 localhost kernel: PCI host bridge to bus 0000:00
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 01 11:17:21 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 01 11:17:21 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 01 11:17:21 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 01 11:17:21 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 01 11:17:21 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 01 11:17:21 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 01 11:17:21 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 01 11:17:21 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 01 11:17:21 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 01 11:17:21 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 01 11:17:21 localhost kernel: iommu: Default domain type: Translated
Oct 01 11:17:21 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 01 11:17:21 localhost kernel: SCSI subsystem initialized
Oct 01 11:17:21 localhost kernel: ACPI: bus type USB registered
Oct 01 11:17:21 localhost kernel: usbcore: registered new interface driver usbfs
Oct 01 11:17:21 localhost kernel: usbcore: registered new interface driver hub
Oct 01 11:17:21 localhost kernel: usbcore: registered new device driver usb
Oct 01 11:17:21 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 01 11:17:21 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 01 11:17:21 localhost kernel: PTP clock support registered
Oct 01 11:17:21 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 01 11:17:21 localhost kernel: NetLabel: Initializing
Oct 01 11:17:21 localhost kernel: NetLabel:  domain hash size = 128
Oct 01 11:17:21 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 01 11:17:21 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 01 11:17:21 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 01 11:17:21 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 01 11:17:21 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 01 11:17:21 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 01 11:17:21 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 01 11:17:21 localhost kernel: vgaarb: loaded
Oct 01 11:17:21 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 01 11:17:21 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 01 11:17:21 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 01 11:17:21 localhost kernel: pnp: PnP ACPI init
Oct 01 11:17:21 localhost kernel: pnp 00:03: [dma 2]
Oct 01 11:17:21 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 01 11:17:21 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 01 11:17:21 localhost kernel: NET: Registered PF_INET protocol family
Oct 01 11:17:21 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 01 11:17:21 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 01 11:17:21 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 01 11:17:21 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 01 11:17:21 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 01 11:17:21 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 01 11:17:21 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 01 11:17:21 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 01 11:17:21 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 01 11:17:21 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 01 11:17:21 localhost kernel: NET: Registered PF_XDP protocol family
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 01 11:17:21 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 01 11:17:21 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 01 11:17:21 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 01 11:17:21 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 99463 usecs
Oct 01 11:17:21 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 01 11:17:21 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 01 11:17:21 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 01 11:17:21 localhost kernel: ACPI: bus type thunderbolt registered
Oct 01 11:17:21 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 01 11:17:21 localhost kernel: Initialise system trusted keyrings
Oct 01 11:17:21 localhost kernel: Key type blacklist registered
Oct 01 11:17:21 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 01 11:17:21 localhost kernel: zbud: loaded
Oct 01 11:17:21 localhost kernel: integrity: Platform Keyring initialized
Oct 01 11:17:21 localhost kernel: integrity: Machine keyring initialized
Oct 01 11:17:21 localhost kernel: Freeing initrd memory: 86080K
Oct 01 11:17:21 localhost kernel: NET: Registered PF_ALG protocol family
Oct 01 11:17:21 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 01 11:17:21 localhost kernel: Key type asymmetric registered
Oct 01 11:17:21 localhost kernel: Asymmetric key parser 'x509' registered
Oct 01 11:17:21 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 01 11:17:21 localhost kernel: io scheduler mq-deadline registered
Oct 01 11:17:21 localhost kernel: io scheduler kyber registered
Oct 01 11:17:21 localhost kernel: io scheduler bfq registered
Oct 01 11:17:21 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 01 11:17:21 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 01 11:17:21 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 01 11:17:21 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 01 11:17:21 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 01 11:17:21 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 01 11:17:21 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 01 11:17:21 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 01 11:17:21 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 01 11:17:21 localhost kernel: Non-volatile memory driver v1.3
Oct 01 11:17:21 localhost kernel: rdac: device handler registered
Oct 01 11:17:21 localhost kernel: hp_sw: device handler registered
Oct 01 11:17:21 localhost kernel: emc: device handler registered
Oct 01 11:17:21 localhost kernel: alua: device handler registered
Oct 01 11:17:21 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 01 11:17:21 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 01 11:17:21 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 01 11:17:21 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 01 11:17:21 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 01 11:17:21 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 01 11:17:21 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 01 11:17:21 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-617.el9.x86_64 uhci_hcd
Oct 01 11:17:21 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 01 11:17:21 localhost kernel: hub 1-0:1.0: USB hub found
Oct 01 11:17:21 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 01 11:17:21 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 01 11:17:21 localhost kernel: usbserial: USB Serial support registered for generic
Oct 01 11:17:21 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 01 11:17:21 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 01 11:17:21 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 01 11:17:21 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 01 11:17:21 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 01 11:17:21 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 01 11:17:21 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 01 11:17:21 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-01T11:17:20 UTC (1759317440)
Oct 01 11:17:21 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 01 11:17:21 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 01 11:17:21 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 01 11:17:21 localhost kernel: usbcore: registered new interface driver usbhid
Oct 01 11:17:21 localhost kernel: usbhid: USB HID core driver
Oct 01 11:17:21 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 01 11:17:21 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 01 11:17:21 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 01 11:17:21 localhost kernel: Initializing XFRM netlink socket
Oct 01 11:17:21 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 01 11:17:21 localhost kernel: Segment Routing with IPv6
Oct 01 11:17:21 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 01 11:17:21 localhost kernel: mpls_gso: MPLS GSO support
Oct 01 11:17:21 localhost kernel: IPI shorthand broadcast: enabled
Oct 01 11:17:21 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 01 11:17:21 localhost kernel: AES CTR mode by8 optimization enabled
Oct 01 11:17:21 localhost kernel: sched_clock: Marking stable (1269003682, 138896220)->(1526996574, -119096672)
Oct 01 11:17:21 localhost kernel: registered taskstats version 1
Oct 01 11:17:21 localhost kernel: Loading compiled-in X.509 certificates
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 01 11:17:21 localhost kernel: Demotion targets for Node 0: null
Oct 01 11:17:21 localhost kernel: page_owner is disabled
Oct 01 11:17:21 localhost kernel: Key type .fscrypt registered
Oct 01 11:17:21 localhost kernel: Key type fscrypt-provisioning registered
Oct 01 11:17:21 localhost kernel: Key type big_key registered
Oct 01 11:17:21 localhost kernel: Key type encrypted registered
Oct 01 11:17:21 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 01 11:17:21 localhost kernel: Loading compiled-in module X.509 certificates
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bb2966091bafcba340f8183756023c985dcc8fe9'
Oct 01 11:17:21 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 01 11:17:21 localhost kernel: ima: No architecture policies found
Oct 01 11:17:21 localhost kernel: evm: Initialising EVM extended attributes:
Oct 01 11:17:21 localhost kernel: evm: security.selinux
Oct 01 11:17:21 localhost kernel: evm: security.SMACK64 (disabled)
Oct 01 11:17:21 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 01 11:17:21 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 01 11:17:21 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 01 11:17:21 localhost kernel: evm: security.apparmor (disabled)
Oct 01 11:17:21 localhost kernel: evm: security.ima
Oct 01 11:17:21 localhost kernel: evm: security.capability
Oct 01 11:17:21 localhost kernel: evm: HMAC attrs: 0x1
Oct 01 11:17:21 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 01 11:17:21 localhost kernel: Running certificate verification RSA selftest
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 01 11:17:21 localhost kernel: Running certificate verification ECDSA selftest
Oct 01 11:17:21 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 01 11:17:21 localhost kernel: clk: Disabling unused clocks
Oct 01 11:17:21 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 01 11:17:21 localhost kernel: Freeing unused kernel image (initmem) memory: 4072K
Oct 01 11:17:21 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 01 11:17:21 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 348K
Oct 01 11:17:21 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 01 11:17:21 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 01 11:17:21 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 01 11:17:21 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 01 11:17:21 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 01 11:17:21 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 01 11:17:21 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 01 11:17:21 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 01 11:17:21 localhost kernel: Run /init as init process
Oct 01 11:17:21 localhost kernel:   with arguments:
Oct 01 11:17:21 localhost kernel:     /init
Oct 01 11:17:21 localhost kernel:   with environment:
Oct 01 11:17:21 localhost kernel:     HOME=/
Oct 01 11:17:21 localhost kernel:     TERM=linux
Oct 01 11:17:21 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64
Oct 01 11:17:21 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 01 11:17:21 localhost systemd[1]: Detected virtualization kvm.
Oct 01 11:17:21 localhost systemd[1]: Detected architecture x86-64.
Oct 01 11:17:21 localhost systemd[1]: Running in initrd.
Oct 01 11:17:21 localhost systemd[1]: No hostname configured, using default hostname.
Oct 01 11:17:21 localhost systemd[1]: Hostname set to <localhost>.
Oct 01 11:17:21 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 01 11:17:21 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 01 11:17:21 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 01 11:17:21 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 01 11:17:21 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 01 11:17:21 localhost systemd[1]: Reached target Local File Systems.
Oct 01 11:17:21 localhost systemd[1]: Reached target Path Units.
Oct 01 11:17:21 localhost systemd[1]: Reached target Slice Units.
Oct 01 11:17:21 localhost systemd[1]: Reached target Swaps.
Oct 01 11:17:21 localhost systemd[1]: Reached target Timer Units.
Oct 01 11:17:21 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 01 11:17:21 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 01 11:17:21 localhost systemd[1]: Listening on Journal Socket.
Oct 01 11:17:21 localhost systemd[1]: Listening on udev Control Socket.
Oct 01 11:17:21 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 01 11:17:21 localhost systemd[1]: Reached target Socket Units.
Oct 01 11:17:21 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 01 11:17:21 localhost systemd[1]: Starting Journal Service...
Oct 01 11:17:21 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 01 11:17:21 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 01 11:17:21 localhost systemd[1]: Starting Create System Users...
Oct 01 11:17:21 localhost systemd[1]: Starting Setup Virtual Console...
Oct 01 11:17:21 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 01 11:17:21 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 01 11:17:21 localhost systemd-journald[307]: Journal started
Oct 01 11:17:21 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/adf090e1fe934ff6a8f54224f2f21059) is 8.0M, max 153.5M, 145.5M free.
Oct 01 11:17:21 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Oct 01 11:17:21 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Oct 01 11:17:21 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 01 11:17:21 localhost systemd[1]: Started Journal Service.
Oct 01 11:17:21 localhost systemd[1]: Finished Create System Users.
Oct 01 11:17:21 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 01 11:17:21 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 01 11:17:21 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 01 11:17:21 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 01 11:17:21 localhost systemd[1]: Finished Setup Virtual Console.
Oct 01 11:17:21 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 01 11:17:21 localhost systemd[1]: Starting dracut cmdline hook...
Oct 01 11:17:21 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct 01 11:17:21 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-617.el9.x86_64 root=UUID=d6a81468-b74c-4055-b485-def635ab40f8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 01 11:17:21 localhost systemd[1]: Finished dracut cmdline hook.
Oct 01 11:17:21 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 01 11:17:21 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 01 11:17:21 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 01 11:17:21 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 01 11:17:21 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 01 11:17:21 localhost kernel: RPC: Registered udp transport module.
Oct 01 11:17:21 localhost kernel: RPC: Registered tcp transport module.
Oct 01 11:17:21 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 01 11:17:21 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 01 11:17:21 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 01 11:17:21 localhost rpc.statd[443]: Initializing NSM state
Oct 01 11:17:21 localhost rpc.idmapd[448]: Setting log level to 0
Oct 01 11:17:21 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 01 11:17:21 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 01 11:17:22 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 01 11:17:22 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 01 11:17:22 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 01 11:17:22 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 01 11:17:22 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 01 11:17:22 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 01 11:17:22 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 11:17:22 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 01 11:17:22 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 01 11:17:22 localhost systemd[1]: Reached target Network.
Oct 01 11:17:22 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 01 11:17:22 localhost systemd[1]: Starting dracut initqueue hook...
Oct 01 11:17:22 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 11:17:22 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 11:17:22 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 01 11:17:22 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 01 11:17:22 localhost systemd[1]: Reached target System Initialization.
Oct 01 11:17:22 localhost systemd[1]: Reached target Basic System.
Oct 01 11:17:22 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 01 11:17:22 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 01 11:17:22 localhost kernel:  vda: vda1
Oct 01 11:17:22 localhost kernel: libata version 3.00 loaded.
Oct 01 11:17:22 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 01 11:17:22 localhost kernel: scsi host0: ata_piix
Oct 01 11:17:22 localhost kernel: scsi host1: ata_piix
Oct 01 11:17:22 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 01 11:17:22 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 01 11:17:22 localhost systemd-udevd[463]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 11:17:22 localhost systemd[1]: Found device /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct 01 11:17:22 localhost systemd[1]: Reached target Initrd Root Device.
Oct 01 11:17:22 localhost kernel: ata1: found unknown device (class 0)
Oct 01 11:17:22 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 01 11:17:22 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 01 11:17:22 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 01 11:17:22 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 01 11:17:22 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 01 11:17:22 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 01 11:17:22 localhost systemd[1]: Finished dracut initqueue hook.
Oct 01 11:17:22 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 01 11:17:22 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 01 11:17:22 localhost systemd[1]: Reached target Remote File Systems.
Oct 01 11:17:22 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 01 11:17:22 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 01 11:17:22 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8...
Oct 01 11:17:22 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Oct 01 11:17:22 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/d6a81468-b74c-4055-b485-def635ab40f8.
Oct 01 11:17:22 localhost systemd[1]: Mounting /sysroot...
Oct 01 11:17:23 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 01 11:17:23 localhost kernel: XFS (vda1): Mounting V5 Filesystem d6a81468-b74c-4055-b485-def635ab40f8
Oct 01 11:17:23 localhost kernel: XFS (vda1): Ending clean mount
Oct 01 11:17:23 localhost systemd[1]: Mounted /sysroot.
Oct 01 11:17:23 localhost systemd[1]: Reached target Initrd Root File System.
Oct 01 11:17:23 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 01 11:17:23 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 01 11:17:23 localhost systemd[1]: Reached target Initrd File Systems.
Oct 01 11:17:23 localhost systemd[1]: Reached target Initrd Default Target.
Oct 01 11:17:23 localhost systemd[1]: Starting dracut mount hook...
Oct 01 11:17:23 localhost systemd[1]: Finished dracut mount hook.
Oct 01 11:17:23 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 01 11:17:23 localhost rpc.idmapd[448]: exiting on signal 15
Oct 01 11:17:23 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 01 11:17:23 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 01 11:17:23 localhost systemd[1]: Stopped target Network.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Timer Units.
Oct 01 11:17:23 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 01 11:17:23 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Basic System.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Path Units.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Remote File Systems.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Slice Units.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Socket Units.
Oct 01 11:17:23 localhost systemd[1]: Stopped target System Initialization.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Local File Systems.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Swaps.
Oct 01 11:17:23 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut mount hook.
Oct 01 11:17:23 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 01 11:17:23 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 01 11:17:23 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 01 11:17:23 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 01 11:17:23 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 01 11:17:23 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 01 11:17:23 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 01 11:17:23 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 01 11:17:23 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 01 11:17:23 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 01 11:17:23 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 01 11:17:23 localhost systemd[1]: systemd-udevd.service: Consumed 1.056s CPU time.
Oct 01 11:17:23 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 01 11:17:23 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Closed udev Control Socket.
Oct 01 11:17:23 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Closed udev Kernel Socket.
Oct 01 11:17:23 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 01 11:17:23 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 01 11:17:23 localhost systemd[1]: Starting Cleanup udev Database...
Oct 01 11:17:23 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 01 11:17:23 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 01 11:17:23 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Stopped Create System Users.
Oct 01 11:17:23 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 01 11:17:23 localhost systemd[1]: Finished Cleanup udev Database.
Oct 01 11:17:23 localhost systemd[1]: Reached target Switch Root.
Oct 01 11:17:23 localhost systemd[1]: Starting Switch Root...
Oct 01 11:17:23 localhost systemd[1]: Switching root.
Oct 01 11:17:23 localhost systemd-journald[307]: Journal stopped
Oct 01 11:17:24 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Oct 01 11:17:24 localhost kernel: audit: type=1404 audit(1759317443.926:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability open_perms=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 11:17:24 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 11:17:24 localhost kernel: audit: type=1403 audit(1759317444.091:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 01 11:17:24 localhost systemd[1]: Successfully loaded SELinux policy in 170.312ms.
Oct 01 11:17:24 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.850ms.
Oct 01 11:17:24 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 01 11:17:24 localhost systemd[1]: Detected virtualization kvm.
Oct 01 11:17:24 localhost systemd[1]: Detected architecture x86-64.
Oct 01 11:17:24 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 11:17:24 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Stopped Switch Root.
Oct 01 11:17:24 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 01 11:17:24 localhost systemd[1]: Created slice Slice /system/getty.
Oct 01 11:17:24 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 01 11:17:24 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 01 11:17:24 localhost systemd[1]: Created slice User and Session Slice.
Oct 01 11:17:24 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 01 11:17:24 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 01 11:17:24 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 01 11:17:24 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 01 11:17:24 localhost systemd[1]: Stopped target Switch Root.
Oct 01 11:17:24 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 01 11:17:24 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 01 11:17:24 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 01 11:17:24 localhost systemd[1]: Reached target Path Units.
Oct 01 11:17:24 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 01 11:17:24 localhost systemd[1]: Reached target Slice Units.
Oct 01 11:17:24 localhost systemd[1]: Reached target Swaps.
Oct 01 11:17:24 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 01 11:17:24 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 01 11:17:24 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 01 11:17:24 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 01 11:17:24 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 01 11:17:24 localhost systemd[1]: Listening on udev Control Socket.
Oct 01 11:17:24 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 01 11:17:24 localhost systemd[1]: Mounting Huge Pages File System...
Oct 01 11:17:24 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 01 11:17:24 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 01 11:17:24 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 01 11:17:24 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 01 11:17:24 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 01 11:17:24 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 11:17:24 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 01 11:17:24 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 01 11:17:24 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 01 11:17:24 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 01 11:17:24 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 01 11:17:24 localhost systemd[1]: Stopped Journal Service.
Oct 01 11:17:24 localhost systemd[1]: Starting Journal Service...
Oct 01 11:17:24 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 01 11:17:24 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 01 11:17:24 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 11:17:24 localhost kernel: fuse: init (API version 7.37)
Oct 01 11:17:24 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 01 11:17:24 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 01 11:17:24 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 01 11:17:24 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 01 11:17:24 localhost systemd-journald[679]: Journal started
Oct 01 11:17:24 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct 01 11:17:24 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 01 11:17:24 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Started Journal Service.
Oct 01 11:17:24 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 01 11:17:24 localhost systemd[1]: Mounted Huge Pages File System.
Oct 01 11:17:24 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 01 11:17:24 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 01 11:17:24 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 01 11:17:24 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 01 11:17:24 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 11:17:24 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 01 11:17:24 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 01 11:17:24 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 01 11:17:24 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 01 11:17:24 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 01 11:17:24 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 01 11:17:24 localhost systemd[1]: Mounting FUSE Control File System...
Oct 01 11:17:24 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 01 11:17:24 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 01 11:17:24 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 01 11:17:24 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 01 11:17:24 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 01 11:17:24 localhost systemd[1]: Starting Create System Users...
Oct 01 11:17:24 localhost systemd[1]: Mounted FUSE Control File System.
Oct 01 11:17:24 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/21983c68f36a73745cc172a394ebc51d) is 8.0M, max 153.5M, 145.5M free.
Oct 01 11:17:24 localhost systemd-journald[679]: Received client request to flush runtime journal.
Oct 01 11:17:24 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 01 11:17:24 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 01 11:17:24 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 01 11:17:24 localhost kernel: ACPI: bus type drm_connector registered
Oct 01 11:17:24 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 01 11:17:24 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 01 11:17:24 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 01 11:17:24 localhost systemd[1]: Finished Create System Users.
Oct 01 11:17:24 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 01 11:17:24 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 01 11:17:25 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 01 11:17:25 localhost systemd[1]: Reached target Local File Systems.
Oct 01 11:17:25 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 01 11:17:25 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 01 11:17:25 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 01 11:17:25 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 01 11:17:25 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 01 11:17:25 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 01 11:17:25 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 01 11:17:25 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Oct 01 11:17:25 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 01 11:17:25 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 01 11:17:25 localhost systemd[1]: Starting Security Auditing Service...
Oct 01 11:17:25 localhost systemd[1]: Starting RPC Bind...
Oct 01 11:17:25 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 01 11:17:25 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 01 11:17:25 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 01 11:17:25 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 01 11:17:25 localhost systemd[1]: Started RPC Bind.
Oct 01 11:17:25 localhost augenrules[710]: /sbin/augenrules: No change
Oct 01 11:17:25 localhost augenrules[725]: No rules
Oct 01 11:17:25 localhost augenrules[725]: enabled 1
Oct 01 11:17:25 localhost augenrules[725]: failure 1
Oct 01 11:17:25 localhost augenrules[725]: pid 705
Oct 01 11:17:25 localhost augenrules[725]: rate_limit 0
Oct 01 11:17:25 localhost augenrules[725]: backlog_limit 8192
Oct 01 11:17:25 localhost augenrules[725]: lost 0
Oct 01 11:17:25 localhost augenrules[725]: backlog 3
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time 60000
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 01 11:17:25 localhost augenrules[725]: enabled 1
Oct 01 11:17:25 localhost augenrules[725]: failure 1
Oct 01 11:17:25 localhost augenrules[725]: pid 705
Oct 01 11:17:25 localhost augenrules[725]: rate_limit 0
Oct 01 11:17:25 localhost augenrules[725]: backlog_limit 8192
Oct 01 11:17:25 localhost augenrules[725]: lost 0
Oct 01 11:17:25 localhost augenrules[725]: backlog 0
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time 60000
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 01 11:17:25 localhost augenrules[725]: enabled 1
Oct 01 11:17:25 localhost augenrules[725]: failure 1
Oct 01 11:17:25 localhost augenrules[725]: pid 705
Oct 01 11:17:25 localhost augenrules[725]: rate_limit 0
Oct 01 11:17:25 localhost augenrules[725]: backlog_limit 8192
Oct 01 11:17:25 localhost augenrules[725]: lost 0
Oct 01 11:17:25 localhost augenrules[725]: backlog 0
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time 60000
Oct 01 11:17:25 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 01 11:17:25 localhost systemd[1]: Started Security Auditing Service.
Oct 01 11:17:25 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 01 11:17:25 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 01 11:17:25 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 01 11:17:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 01 11:17:25 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Oct 01 11:17:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 01 11:17:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 01 11:17:25 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 01 11:17:25 localhost systemd[1]: Starting Update is Completed...
Oct 01 11:17:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 01 11:17:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 01 11:17:25 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 01 11:17:25 localhost systemd[1]: Finished Update is Completed.
Oct 01 11:17:25 localhost systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 11:17:25 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 01 11:17:25 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 01 11:17:25 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 01 11:17:25 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 01 11:17:25 localhost systemd[1]: Reached target System Initialization.
Oct 01 11:17:25 localhost systemd[1]: Started dnf makecache --timer.
Oct 01 11:17:25 localhost systemd[1]: Started Daily rotation of log files.
Oct 01 11:17:25 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 01 11:17:25 localhost systemd[1]: Reached target Timer Units.
Oct 01 11:17:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 01 11:17:25 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 01 11:17:25 localhost systemd[1]: Reached target Socket Units.
Oct 01 11:17:25 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 01 11:17:25 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 11:17:25 localhost kernel: kvm_amd: TSC scaling supported
Oct 01 11:17:25 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 01 11:17:25 localhost kernel: kvm_amd: Nested Paging enabled
Oct 01 11:17:25 localhost kernel: kvm_amd: LBR virtualization supported
Oct 01 11:17:25 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 01 11:17:25 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 01 11:17:25 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 01 11:17:25 localhost dbus-broker-lau[784]: Ready
Oct 01 11:17:25 localhost kernel: Console: switching to colour dummy device 80x25
Oct 01 11:17:25 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 01 11:17:25 localhost kernel: [drm] features: -context_init
Oct 01 11:17:25 localhost kernel: [drm] number of scanouts: 1
Oct 01 11:17:25 localhost kernel: [drm] number of cap sets: 0
Oct 01 11:17:25 localhost systemd[1]: Reached target Basic System.
Oct 01 11:17:25 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 01 11:17:25 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 01 11:17:25 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 01 11:17:25 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 01 11:17:25 localhost systemd[1]: Starting NTP client/server...
Oct 01 11:17:25 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 01 11:17:25 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 01 11:17:25 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 01 11:17:25 localhost systemd[1]: Started irqbalance daemon.
Oct 01 11:17:25 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 01 11:17:25 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 11:17:25 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 11:17:25 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 11:17:25 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 01 11:17:25 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 01 11:17:25 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 01 11:17:25 localhost systemd[1]: Starting User Login Management...
Oct 01 11:17:25 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 01 11:17:25 localhost chronyd[828]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 01 11:17:25 localhost chronyd[828]: Loaded 0 symmetric keys
Oct 01 11:17:25 localhost chronyd[828]: Using right/UTC timezone to obtain leap second data
Oct 01 11:17:25 localhost chronyd[828]: Loaded seccomp filter (level 2)
Oct 01 11:17:25 localhost systemd[1]: Started NTP client/server.
Oct 01 11:17:25 localhost systemd-logind[818]: New seat seat0.
Oct 01 11:17:25 localhost systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 01 11:17:25 localhost systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 01 11:17:25 localhost systemd[1]: Started User Login Management.
Oct 01 11:17:25 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 01 11:17:25 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 01 11:17:26 localhost iptables.init[812]: iptables: Applying firewall rules: [  OK  ]
Oct 01 11:17:26 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 01 11:17:26 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 01 Oct 2025 11:17:26 +0000. Up 7.30 seconds.
Oct 01 11:17:26 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 01 11:17:26 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 01 11:17:26 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpoevjftbr.mount: Deactivated successfully.
Oct 01 11:17:26 localhost systemd[1]: Starting Hostname Service...
Oct 01 11:17:27 localhost systemd[1]: Started Hostname Service.
Oct 01 11:17:27 np0005464214.novalocal systemd-hostnamed[856]: Hostname set to <np0005464214.novalocal> (static)
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Reached target Preparation for Network.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Starting Network Manager...
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2574] NetworkManager (version 1.54.1-1.el9) is starting... (boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2581] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2754] manager[0x55a7c5ae3080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2820] hostname: hostname: using hostnamed
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2820] hostname: static hostname changed from (none) to "np0005464214.novalocal"
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.2826] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3007] manager[0x55a7c5ae3080]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3008] manager[0x55a7c5ae3080]: rfkill: WWAN hardware radio set enabled
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3137] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3138] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3139] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3140] manager: Networking is enabled by state file
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3143] settings: Loaded settings plugin: keyfile (internal)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3188] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3231] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3260] dhcp: init: Using DHCP client 'internal'
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3265] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3287] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3306] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3319] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3334] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3340] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3386] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3393] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3396] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3399] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3402] device (eth0): carrier: link connected
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3405] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3416] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3429] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Started Network Manager.
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3435] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3436] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3440] manager: NetworkManager state is now CONNECTING
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3442] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3454] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Reached target Network.
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3458] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3680] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3683] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 11:17:27 np0005464214.novalocal NetworkManager[860]: <info>  [1759317447.3695] device (lo): Activation: successful, device activated.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Reached target NFS client services.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: Reached target Remote File Systems.
Oct 01 11:17:27 np0005464214.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1266] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1282] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1306] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1357] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1362] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1372] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1385] device (eth0): Activation: successful, device activated.
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1397] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 11:17:30 np0005464214.novalocal NetworkManager[860]: <info>  [1759317450.1405] manager: startup complete
Oct 01 11:17:30 np0005464214.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 01 11:17:30 np0005464214.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 01 Oct 2025 11:17:30 +0000. Up 11.17 seconds.
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.245         | 255.255.255.0 | global | fa:16:3e:d5:7e:d5 |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fed5:7ed5/64 |       .       |  link  | fa:16:3e:d5:7e:d5 |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Oct 01 11:17:30 np0005464214.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: new group: name=cloud-user, GID=1001
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: add 'cloud-user' to group 'adm'
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: add 'cloud-user' to group 'systemd-journal'
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: add 'cloud-user' to shadow group 'adm'
Oct 01 11:17:31 np0005464214.novalocal useradd[993]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Generating public/private rsa key pair.
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key fingerprint is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: SHA256:pgrCOGtIpezjdKJA13ruYmzWKAoe0CGFYbphp2b4ifw root@np0005464214.novalocal
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key's randomart image is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +---[RSA 3072]----+
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |.+.              |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |+.               |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |+...             |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |o=oo.            |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |==+. .  S        |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |O*...  o         |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |@*=oo..          |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |BO=Ooo           |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |Bo*E+o           |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key fingerprint is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: SHA256:j6kliYfLomOaGvhSLYQUgFXJbclTNJRXm38fv4fWQvA root@np0005464214.novalocal
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key's randomart image is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +---[ECDSA 256]---+
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |+ooo.+ ==. ..    |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |..  o * ...  o   |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |..   . . .  o    |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |. .         ..   |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: | . .    S    o...|
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |. o .o . +    E.+|
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |o. .o + + .  . oo|
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |o=.. o +      + +|
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |O+..o .      . o.|
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key fingerprint is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: SHA256:ee4z9OjMklqOKhDc7ejH0A06DGzQVKg0IWRWpgYYxWQ root@np0005464214.novalocal
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: The key's randomart image is:
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +--[ED25519 256]--+
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |*%E=.            |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |B+*              |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |=+o .            |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |+= . o   .       |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |..o = o S .      |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |.  * o . o.      |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: | .. +   .o.o     |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |  .. o +o++ .    |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: |   .o.o..o=o     |
Oct 01 11:17:31 np0005464214.novalocal cloud-init[924]: +----[SHA256]-----+
Oct 01 11:17:31 np0005464214.novalocal sm-notify[1008]: Version 2.5.4 starting
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Reached target Network is Online.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting System Logging Service...
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting Permit User Sessions...
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Finished Permit User Sessions.
Oct 01 11:17:31 np0005464214.novalocal sshd[1010]: Server listening on 0.0.0.0 port 22.
Oct 01 11:17:31 np0005464214.novalocal sshd[1010]: Server listening on :: port 22.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started Command Scheduler.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started Getty on tty1.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Reached target Login Prompts.
Oct 01 11:17:31 np0005464214.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Oct 01 11:17:31 np0005464214.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 01 11:17:31 np0005464214.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 24% if used.)
Oct 01 11:17:31 np0005464214.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Oct 01 11:17:31 np0005464214.novalocal rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Started System Logging Service.
Oct 01 11:17:31 np0005464214.novalocal rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Reached target Multi-User System.
Oct 01 11:17:31 np0005464214.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 01 11:17:32 np0005464214.novalocal rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1022]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 01 Oct 2025 11:17:32 +0000. Up 12.90 seconds.
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1026]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 01 Oct 2025 11:17:32 +0000. Up 13.30 seconds.
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1028]: #############################################################
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1029]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1031]: 256 SHA256:j6kliYfLomOaGvhSLYQUgFXJbclTNJRXm38fv4fWQvA root@np0005464214.novalocal (ECDSA)
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1033]: 256 SHA256:ee4z9OjMklqOKhDc7ejH0A06DGzQVKg0IWRWpgYYxWQ root@np0005464214.novalocal (ED25519)
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1036]: 3072 SHA256:pgrCOGtIpezjdKJA13ruYmzWKAoe0CGFYbphp2b4ifw root@np0005464214.novalocal (RSA)
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1037]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1038]: #############################################################
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1035]: Connection reset by 38.102.83.114 port 36874 [preauth]
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1043]: Unable to negotiate with 38.102.83.114 port 36890: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1045]: Connection reset by 38.102.83.114 port 36904 [preauth]
Oct 01 11:17:32 np0005464214.novalocal cloud-init[1026]: Cloud-init v. 24.4-7.el9 finished at Wed, 01 Oct 2025 11:17:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.56 seconds
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1047]: Unable to negotiate with 38.102.83.114 port 36918: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1049]: Unable to negotiate with 38.102.83.114 port 36928: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1051]: Connection reset by 38.102.83.114 port 36940 [preauth]
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Reached target Cloud-init target.
Oct 01 11:17:32 np0005464214.novalocal systemd[1]: Startup finished in 1.659s (kernel) + 2.978s (initrd) + 9.009s (userspace) = 13.647s.
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1055]: Unable to negotiate with 38.102.83.114 port 36958: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 01 11:17:32 np0005464214.novalocal sshd-session[1057]: Unable to negotiate with 38.102.83.114 port 36960: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 01 11:17:33 np0005464214.novalocal sshd-session[1053]: Connection closed by 38.102.83.114 port 36950 [preauth]
Oct 01 11:17:35 np0005464214.novalocal chronyd[828]: Selected source 54.39.196.172 (2.centos.pool.ntp.org)
Oct 01 11:17:35 np0005464214.novalocal chronyd[828]: System clock TAI offset set to 37 seconds
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 25 affinity is now unmanaged
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 31 affinity is now unmanaged
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 28 affinity is now unmanaged
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 32 affinity is now unmanaged
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 30 affinity is now unmanaged
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 01 11:17:36 np0005464214.novalocal irqbalance[814]: IRQ 29 affinity is now unmanaged
Oct 01 11:17:40 np0005464214.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 11:17:57 np0005464214.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 11:28:38 np0005464214.novalocal sshd-session[1065]: Invalid user carlos from 80.94.95.116 port 49066
Oct 01 11:28:38 np0005464214.novalocal sshd-session[1065]: Connection closed by invalid user carlos 80.94.95.116 port 49066 [preauth]
Oct 01 11:29:54 np0005464214.novalocal systemd[1]: Starting dnf makecache...
Oct 01 11:29:54 np0005464214.novalocal dnf[1069]: Failed determining last makecache time.
Oct 01 11:29:55 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - BaseOS                         47 kB/s | 6.7 kB     00:00
Oct 01 11:29:55 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - BaseOS                        9.8 MB/s | 8.8 MB     00:00
Oct 01 11:29:57 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - AppStream                      28 kB/s | 6.8 kB     00:00
Oct 01 11:29:58 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - AppStream                      18 MB/s |  25 MB     00:01
Oct 01 11:30:04 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - CRB                            26 kB/s | 6.6 kB     00:00
Oct 01 11:30:05 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - CRB                           7.8 MB/s | 7.1 MB     00:00
Oct 01 11:30:07 np0005464214.novalocal dnf[1069]: CentOS Stream 9 - Extras packages                33 kB/s | 8.0 kB     00:00
Oct 01 11:30:08 np0005464214.novalocal dnf[1069]: Metadata cache created.
Oct 01 11:30:08 np0005464214.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 01 11:30:08 np0005464214.novalocal systemd[1]: Finished dnf makecache.
Oct 01 11:30:08 np0005464214.novalocal systemd[1]: dnf-makecache.service: Consumed 10.379s CPU time.
Oct 01 11:32:54 np0005464214.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Oct 01 11:32:54 np0005464214.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 01 11:32:54 np0005464214.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Oct 01 11:32:54 np0005464214.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 01 11:39:28 np0005464214.novalocal sshd-session[1096]: Connection closed by 80.82.70.133 port 60000
Oct 01 11:39:35 np0005464214.novalocal sshd-session[1098]: Connection closed by 94.102.49.155 port 4604
Oct 01 11:39:35 np0005464214.novalocal sshd-session[1099]: Connection closed by 94.102.49.155 port 16484 [preauth]
Oct 01 11:39:44 np0005464214.novalocal sshd-session[1101]: Connection closed by authenticating user nobody 185.156.73.233 port 52242 [preauth]
Oct 01 11:42:45 np0005464214.novalocal sshd-session[1104]: Invalid user admin from 78.128.112.74 port 55790
Oct 01 11:42:45 np0005464214.novalocal sshd-session[1104]: Connection closed by invalid user admin 78.128.112.74 port 55790 [preauth]
Oct 01 11:45:14 np0005464214.novalocal sshd-session[1107]: Invalid user admin from 80.94.95.25 port 10817
Oct 01 11:45:14 np0005464214.novalocal sshd-session[1107]: Received disconnect from 80.94.95.25 port 10817:11: Bye [preauth]
Oct 01 11:45:14 np0005464214.novalocal sshd-session[1107]: Disconnected from invalid user admin 80.94.95.25 port 10817 [preauth]
Oct 01 11:52:12 np0005464214.novalocal sshd-session[1112]: Connection closed by authenticating user root 185.156.73.233 port 19646 [preauth]
Oct 01 12:00:40 np0005464214.novalocal sshd-session[1118]: Invalid user seekcy from 49.49.32.245 port 59642
Oct 01 12:00:40 np0005464214.novalocal sshd-session[1118]: Received disconnect from 49.49.32.245 port 59642:11: Bye Bye [preauth]
Oct 01 12:00:40 np0005464214.novalocal sshd-session[1118]: Disconnected from invalid user seekcy 49.49.32.245 port 59642 [preauth]
Oct 01 12:00:41 np0005464214.novalocal sshd-session[1120]: Invalid user pavan from 217.154.42.86 port 38288
Oct 01 12:00:41 np0005464214.novalocal sshd-session[1120]: Received disconnect from 217.154.42.86 port 38288:11: Bye Bye [preauth]
Oct 01 12:00:41 np0005464214.novalocal sshd-session[1120]: Disconnected from invalid user pavan 217.154.42.86 port 38288 [preauth]
Oct 01 12:00:54 np0005464214.novalocal sshd-session[1122]: Invalid user ventas from 121.142.87.218 port 52100
Oct 01 12:00:54 np0005464214.novalocal sshd-session[1122]: Received disconnect from 121.142.87.218 port 52100:11: Bye Bye [preauth]
Oct 01 12:00:54 np0005464214.novalocal sshd-session[1122]: Disconnected from invalid user ventas 121.142.87.218 port 52100 [preauth]
Oct 01 12:01:01 np0005464214.novalocal CROND[1126]: (root) CMD (run-parts /etc/cron.hourly)
Oct 01 12:01:02 np0005464214.novalocal run-parts[1129]: (/etc/cron.hourly) starting 0anacron
Oct 01 12:01:02 np0005464214.novalocal anacron[1137]: Anacron started on 2025-10-01
Oct 01 12:01:02 np0005464214.novalocal run-parts[1139]: (/etc/cron.hourly) finished 0anacron
Oct 01 12:01:02 np0005464214.novalocal CROND[1125]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 01 12:01:02 np0005464214.novalocal anacron[1137]: Will run job `cron.daily' in 13 min.
Oct 01 12:01:02 np0005464214.novalocal anacron[1137]: Will run job `cron.weekly' in 33 min.
Oct 01 12:01:02 np0005464214.novalocal anacron[1137]: Will run job `cron.monthly' in 53 min.
Oct 01 12:01:02 np0005464214.novalocal anacron[1137]: Jobs will be executed sequentially
Oct 01 12:01:37 np0005464214.novalocal sshd-session[1143]: Invalid user rootftp from 45.249.247.86 port 36392
Oct 01 12:01:37 np0005464214.novalocal sshd-session[1143]: Received disconnect from 45.249.247.86 port 36392:11: Bye Bye [preauth]
Oct 01 12:01:37 np0005464214.novalocal sshd-session[1143]: Disconnected from invalid user rootftp 45.249.247.86 port 36392 [preauth]
Oct 01 12:03:15 np0005464214.novalocal sshd[1010]: Timeout before authentication for connection from 14.103.83.66 to 38.102.83.245, pid = 1142
Oct 01 12:03:25 np0005464214.novalocal sshd-session[1145]: Invalid user factorio from 217.154.42.86 port 53294
Oct 01 12:03:25 np0005464214.novalocal sshd-session[1145]: Received disconnect from 217.154.42.86 port 53294:11: Bye Bye [preauth]
Oct 01 12:03:25 np0005464214.novalocal sshd-session[1145]: Disconnected from invalid user factorio 217.154.42.86 port 53294 [preauth]
Oct 01 12:03:36 np0005464214.novalocal sshd-session[1147]: Invalid user grafana from 175.126.166.172 port 57662
Oct 01 12:03:36 np0005464214.novalocal sshd-session[1147]: Received disconnect from 175.126.166.172 port 57662:11: Bye Bye [preauth]
Oct 01 12:03:36 np0005464214.novalocal sshd-session[1147]: Disconnected from invalid user grafana 175.126.166.172 port 57662 [preauth]
Oct 01 12:04:03 np0005464214.novalocal sshd-session[1149]: Invalid user test from 49.49.32.245 port 49254
Oct 01 12:04:03 np0005464214.novalocal sshd-session[1149]: Received disconnect from 49.49.32.245 port 49254:11: Bye Bye [preauth]
Oct 01 12:04:03 np0005464214.novalocal sshd-session[1149]: Disconnected from invalid user test 49.49.32.245 port 49254 [preauth]
Oct 01 12:04:19 np0005464214.novalocal sshd-session[1151]: Received disconnect from 45.249.247.86 port 43792:11: Bye Bye [preauth]
Oct 01 12:04:19 np0005464214.novalocal sshd-session[1151]: Disconnected from authenticating user root 45.249.247.86 port 43792 [preauth]
Oct 01 12:04:24 np0005464214.novalocal sshd-session[1153]: Received disconnect from 217.154.42.86 port 38794:11: Bye Bye [preauth]
Oct 01 12:04:24 np0005464214.novalocal sshd-session[1153]: Disconnected from authenticating user root 217.154.42.86 port 38794 [preauth]
Oct 01 12:04:31 np0005464214.novalocal sshd-session[1155]: Invalid user bruno from 121.142.87.218 port 45008
Oct 01 12:04:31 np0005464214.novalocal sshd-session[1155]: Received disconnect from 121.142.87.218 port 45008:11: Bye Bye [preauth]
Oct 01 12:04:31 np0005464214.novalocal sshd-session[1155]: Disconnected from invalid user bruno 121.142.87.218 port 45008 [preauth]
Oct 01 12:04:52 np0005464214.novalocal sshd-session[1159]: Received disconnect from 175.126.166.172 port 44854:11: Bye Bye [preauth]
Oct 01 12:04:52 np0005464214.novalocal sshd-session[1159]: Disconnected from authenticating user root 175.126.166.172 port 44854 [preauth]
Oct 01 12:05:11 np0005464214.novalocal sshd-session[1158]: error: kex_exchange_identification: read: Connection reset by peer
Oct 01 12:05:11 np0005464214.novalocal sshd-session[1158]: Connection reset by 45.140.17.97 port 20924
Oct 01 12:05:24 np0005464214.novalocal sshd-session[1161]: Invalid user ramesh from 49.49.32.245 port 44418
Oct 01 12:05:24 np0005464214.novalocal sshd-session[1161]: Received disconnect from 49.49.32.245 port 44418:11: Bye Bye [preauth]
Oct 01 12:05:24 np0005464214.novalocal sshd-session[1161]: Disconnected from invalid user ramesh 49.49.32.245 port 44418 [preauth]
Oct 01 12:05:25 np0005464214.novalocal sshd-session[1163]: Invalid user seekcy from 217.154.42.86 port 55012
Oct 01 12:05:25 np0005464214.novalocal sshd-session[1163]: Received disconnect from 217.154.42.86 port 55012:11: Bye Bye [preauth]
Oct 01 12:05:25 np0005464214.novalocal sshd-session[1163]: Disconnected from invalid user seekcy 217.154.42.86 port 55012 [preauth]
Oct 01 12:05:43 np0005464214.novalocal sshd-session[1165]: Received disconnect from 121.142.87.218 port 39838:11: Bye Bye [preauth]
Oct 01 12:05:43 np0005464214.novalocal sshd-session[1165]: Disconnected from authenticating user root 121.142.87.218 port 39838 [preauth]
Oct 01 12:05:49 np0005464214.novalocal sshd-session[1167]: Connection closed by 191.220.115.223 port 46202 [preauth]
Oct 01 12:06:01 np0005464214.novalocal sshd-session[1169]: Invalid user seekcy from 45.249.247.86 port 58430
Oct 01 12:06:01 np0005464214.novalocal sshd-session[1169]: Received disconnect from 45.249.247.86 port 58430:11: Bye Bye [preauth]
Oct 01 12:06:01 np0005464214.novalocal sshd-session[1169]: Disconnected from invalid user seekcy 45.249.247.86 port 58430 [preauth]
Oct 01 12:06:04 np0005464214.novalocal sshd-session[1171]: Received disconnect from 175.126.166.172 port 39476:11: Bye Bye [preauth]
Oct 01 12:06:04 np0005464214.novalocal sshd-session[1171]: Disconnected from authenticating user root 175.126.166.172 port 39476 [preauth]
Oct 01 12:06:18 np0005464214.novalocal sshd-session[1173]: Invalid user ftpuser from 217.154.42.86 port 39708
Oct 01 12:06:18 np0005464214.novalocal sshd-session[1173]: Received disconnect from 217.154.42.86 port 39708:11: Bye Bye [preauth]
Oct 01 12:06:18 np0005464214.novalocal sshd-session[1173]: Disconnected from invalid user ftpuser 217.154.42.86 port 39708 [preauth]
Oct 01 12:06:37 np0005464214.novalocal sshd-session[1176]: Received disconnect from 49.49.32.245 port 39584:11: Bye Bye [preauth]
Oct 01 12:06:37 np0005464214.novalocal sshd-session[1176]: Disconnected from authenticating user root 49.49.32.245 port 39584 [preauth]
Oct 01 12:06:53 np0005464214.novalocal sshd-session[1178]: Received disconnect from 121.142.87.218 port 34690:11: Bye Bye [preauth]
Oct 01 12:06:53 np0005464214.novalocal sshd-session[1178]: Disconnected from authenticating user root 121.142.87.218 port 34690 [preauth]
Oct 01 12:07:10 np0005464214.novalocal sshd-session[1180]: Invalid user seekcy from 217.154.42.86 port 60178
Oct 01 12:07:10 np0005464214.novalocal sshd-session[1180]: Received disconnect from 217.154.42.86 port 60178:11: Bye Bye [preauth]
Oct 01 12:07:10 np0005464214.novalocal sshd-session[1180]: Disconnected from invalid user seekcy 217.154.42.86 port 60178 [preauth]
Oct 01 12:07:14 np0005464214.novalocal sshd-session[1182]: Invalid user test from 175.126.166.172 port 50144
Oct 01 12:07:14 np0005464214.novalocal sshd-session[1182]: Received disconnect from 175.126.166.172 port 50144:11: Bye Bye [preauth]
Oct 01 12:07:14 np0005464214.novalocal sshd-session[1182]: Disconnected from invalid user test 175.126.166.172 port 50144 [preauth]
Oct 01 12:07:45 np0005464214.novalocal sshd-session[1185]: Invalid user eva from 49.49.32.245 port 34742
Oct 01 12:07:46 np0005464214.novalocal sshd-session[1185]: Received disconnect from 49.49.32.245 port 34742:11: Bye Bye [preauth]
Oct 01 12:07:46 np0005464214.novalocal sshd-session[1185]: Disconnected from invalid user eva 49.49.32.245 port 34742 [preauth]
Oct 01 12:07:56 np0005464214.novalocal sshd-session[1187]: Invalid user lab from 185.156.73.233 port 19038
Oct 01 12:07:56 np0005464214.novalocal sshd-session[1187]: Connection closed by invalid user lab 185.156.73.233 port 19038 [preauth]
Oct 01 12:08:02 np0005464214.novalocal sshd-session[1189]: Invalid user ubuntu from 217.154.42.86 port 40210
Oct 01 12:08:02 np0005464214.novalocal sshd-session[1189]: Received disconnect from 217.154.42.86 port 40210:11: Bye Bye [preauth]
Oct 01 12:08:02 np0005464214.novalocal sshd-session[1189]: Disconnected from invalid user ubuntu 217.154.42.86 port 40210 [preauth]
Oct 01 12:08:04 np0005464214.novalocal sshd-session[1191]: Invalid user test from 121.142.87.218 port 57766
Oct 01 12:08:04 np0005464214.novalocal sshd-session[1191]: Received disconnect from 121.142.87.218 port 57766:11: Bye Bye [preauth]
Oct 01 12:08:04 np0005464214.novalocal sshd-session[1191]: Disconnected from invalid user test 121.142.87.218 port 57766 [preauth]
Oct 01 12:08:26 np0005464214.novalocal sshd-session[1193]: Invalid user giorgio from 175.126.166.172 port 43496
Oct 01 12:08:26 np0005464214.novalocal sshd-session[1193]: Received disconnect from 175.126.166.172 port 43496:11: Bye Bye [preauth]
Oct 01 12:08:26 np0005464214.novalocal sshd-session[1193]: Disconnected from invalid user giorgio 175.126.166.172 port 43496 [preauth]
Oct 01 12:08:54 np0005464214.novalocal sshd-session[1195]: Received disconnect from 217.154.42.86 port 40410:11: Bye Bye [preauth]
Oct 01 12:08:55 np0005464214.novalocal sshd-session[1195]: Disconnected from authenticating user root 217.154.42.86 port 40410 [preauth]
Oct 01 12:08:56 np0005464214.novalocal sshd-session[1197]: Received disconnect from 49.49.32.245 port 58134:11: Bye Bye [preauth]
Oct 01 12:08:56 np0005464214.novalocal sshd-session[1197]: Disconnected from authenticating user root 49.49.32.245 port 58134 [preauth]
Oct 01 12:09:19 np0005464214.novalocal sshd-session[1199]: Invalid user majid from 121.142.87.218 port 52606
Oct 01 12:09:19 np0005464214.novalocal sshd-session[1199]: Received disconnect from 121.142.87.218 port 52606:11: Bye Bye [preauth]
Oct 01 12:09:19 np0005464214.novalocal sshd-session[1199]: Disconnected from invalid user majid 121.142.87.218 port 52606 [preauth]
Oct 01 12:09:40 np0005464214.novalocal sshd-session[1201]: Invalid user seekcy from 175.126.166.172 port 35710
Oct 01 12:09:40 np0005464214.novalocal sshd-session[1201]: Received disconnect from 175.126.166.172 port 35710:11: Bye Bye [preauth]
Oct 01 12:09:40 np0005464214.novalocal sshd-session[1201]: Disconnected from invalid user seekcy 175.126.166.172 port 35710 [preauth]
Oct 01 12:09:51 np0005464214.novalocal sshd-session[1203]: Invalid user oleg from 217.154.42.86 port 50664
Oct 01 12:09:52 np0005464214.novalocal sshd-session[1203]: Received disconnect from 217.154.42.86 port 50664:11: Bye Bye [preauth]
Oct 01 12:09:52 np0005464214.novalocal sshd-session[1203]: Disconnected from invalid user oleg 217.154.42.86 port 50664 [preauth]
Oct 01 12:10:10 np0005464214.novalocal sshd-session[1205]: Invalid user seekcy from 49.49.32.245 port 53302
Oct 01 12:10:11 np0005464214.novalocal sshd-session[1205]: Received disconnect from 49.49.32.245 port 53302:11: Bye Bye [preauth]
Oct 01 12:10:11 np0005464214.novalocal sshd-session[1205]: Disconnected from invalid user seekcy 49.49.32.245 port 53302 [preauth]
Oct 01 12:10:33 np0005464214.novalocal sshd-session[1207]: Invalid user xxt from 121.142.87.218 port 47436
Oct 01 12:10:33 np0005464214.novalocal sshd-session[1207]: Received disconnect from 121.142.87.218 port 47436:11: Bye Bye [preauth]
Oct 01 12:10:33 np0005464214.novalocal sshd-session[1207]: Disconnected from invalid user xxt 121.142.87.218 port 47436 [preauth]
Oct 01 12:10:46 np0005464214.novalocal sshd-session[1209]: Invalid user oleg from 45.249.247.86 port 57254
Oct 01 12:10:46 np0005464214.novalocal sshd-session[1209]: Received disconnect from 45.249.247.86 port 57254:11: Bye Bye [preauth]
Oct 01 12:10:46 np0005464214.novalocal sshd-session[1209]: Disconnected from invalid user oleg 45.249.247.86 port 57254 [preauth]
Oct 01 12:10:47 np0005464214.novalocal sshd-session[1211]: Invalid user webdev from 217.154.42.86 port 38962
Oct 01 12:10:47 np0005464214.novalocal sshd-session[1211]: Received disconnect from 217.154.42.86 port 38962:11: Bye Bye [preauth]
Oct 01 12:10:47 np0005464214.novalocal sshd-session[1211]: Disconnected from invalid user webdev 217.154.42.86 port 38962 [preauth]
Oct 01 12:10:56 np0005464214.novalocal sshd-session[1213]: Invalid user mapr from 175.126.166.172 port 39982
Oct 01 12:10:56 np0005464214.novalocal sshd-session[1213]: Received disconnect from 175.126.166.172 port 39982:11: Bye Bye [preauth]
Oct 01 12:10:56 np0005464214.novalocal sshd-session[1213]: Disconnected from invalid user mapr 175.126.166.172 port 39982 [preauth]
Oct 01 12:11:23 np0005464214.novalocal sshd-session[1216]: Invalid user oleg from 49.49.32.245 port 48466
Oct 01 12:11:23 np0005464214.novalocal sshd-session[1216]: Received disconnect from 49.49.32.245 port 48466:11: Bye Bye [preauth]
Oct 01 12:11:23 np0005464214.novalocal sshd-session[1216]: Disconnected from invalid user oleg 49.49.32.245 port 48466 [preauth]
Oct 01 12:11:44 np0005464214.novalocal sshd-session[1218]: Received disconnect from 217.154.42.86 port 43266:11: Bye Bye [preauth]
Oct 01 12:11:44 np0005464214.novalocal sshd-session[1218]: Disconnected from authenticating user root 217.154.42.86 port 43266 [preauth]
Oct 01 12:11:49 np0005464214.novalocal sshd-session[1220]: Received disconnect from 121.142.87.218 port 42276:11: Bye Bye [preauth]
Oct 01 12:11:49 np0005464214.novalocal sshd-session[1220]: Disconnected from authenticating user root 121.142.87.218 port 42276 [preauth]
Oct 01 12:12:14 np0005464214.novalocal sshd-session[1223]: Invalid user webdev from 175.126.166.172 port 54718
Oct 01 12:12:14 np0005464214.novalocal sshd-session[1223]: Received disconnect from 175.126.166.172 port 54718:11: Bye Bye [preauth]
Oct 01 12:12:14 np0005464214.novalocal sshd-session[1223]: Disconnected from invalid user webdev 175.126.166.172 port 54718 [preauth]
Oct 01 12:12:23 np0005464214.novalocal sshd-session[1225]: Invalid user webdev from 45.249.247.86 port 57576
Oct 01 12:12:23 np0005464214.novalocal sshd-session[1225]: Received disconnect from 45.249.247.86 port 57576:11: Bye Bye [preauth]
Oct 01 12:12:23 np0005464214.novalocal sshd-session[1225]: Disconnected from invalid user webdev 45.249.247.86 port 57576 [preauth]
Oct 01 12:12:34 np0005464214.novalocal sshd-session[1227]: Invalid user rk from 49.49.32.245 port 43626
Oct 01 12:12:35 np0005464214.novalocal sshd-session[1227]: Received disconnect from 49.49.32.245 port 43626:11: Bye Bye [preauth]
Oct 01 12:12:35 np0005464214.novalocal sshd-session[1227]: Disconnected from invalid user rk 49.49.32.245 port 43626 [preauth]
Oct 01 12:12:40 np0005464214.novalocal sshd-session[1229]: Invalid user ramesh from 217.154.42.86 port 44958
Oct 01 12:12:40 np0005464214.novalocal sshd-session[1229]: Received disconnect from 217.154.42.86 port 44958:11: Bye Bye [preauth]
Oct 01 12:12:40 np0005464214.novalocal sshd-session[1229]: Disconnected from invalid user ramesh 217.154.42.86 port 44958 [preauth]
Oct 01 12:13:05 np0005464214.novalocal sshd-session[1232]: Received disconnect from 121.142.87.218 port 37134:11: Bye Bye [preauth]
Oct 01 12:13:05 np0005464214.novalocal sshd-session[1232]: Disconnected from authenticating user root 121.142.87.218 port 37134 [preauth]
Oct 01 12:13:32 np0005464214.novalocal sshd-session[1234]: Received disconnect from 175.126.166.172 port 56330:11: Bye Bye [preauth]
Oct 01 12:13:32 np0005464214.novalocal sshd-session[1234]: Disconnected from authenticating user root 175.126.166.172 port 56330 [preauth]
Oct 01 12:13:32 np0005464214.novalocal sshd-session[1236]: Invalid user seekcy from 217.154.42.86 port 44638
Oct 01 12:13:32 np0005464214.novalocal sshd-session[1236]: Received disconnect from 217.154.42.86 port 44638:11: Bye Bye [preauth]
Oct 01 12:13:32 np0005464214.novalocal sshd-session[1236]: Disconnected from invalid user seekcy 217.154.42.86 port 44638 [preauth]
Oct 01 12:13:43 np0005464214.novalocal sshd-session[1238]: Received disconnect from 49.49.32.245 port 38790:11: Bye Bye [preauth]
Oct 01 12:13:43 np0005464214.novalocal sshd-session[1238]: Disconnected from authenticating user root 49.49.32.245 port 38790 [preauth]
Oct 01 12:13:55 np0005464214.novalocal sshd-session[1240]: Received disconnect from 45.249.247.86 port 52712:11: Bye Bye [preauth]
Oct 01 12:13:55 np0005464214.novalocal sshd-session[1240]: Disconnected from authenticating user root 45.249.247.86 port 52712 [preauth]
Oct 01 12:14:02 np0005464214.novalocal anacron[1137]: Job `cron.daily' started
Oct 01 12:14:02 np0005464214.novalocal anacron[1137]: Job `cron.daily' terminated
Oct 01 12:14:20 np0005464214.novalocal sshd-session[1244]: Invalid user delphi from 121.142.87.218 port 60222
Oct 01 12:14:20 np0005464214.novalocal sshd-session[1244]: Received disconnect from 121.142.87.218 port 60222:11: Bye Bye [preauth]
Oct 01 12:14:20 np0005464214.novalocal sshd-session[1244]: Disconnected from invalid user delphi 121.142.87.218 port 60222 [preauth]
Oct 01 12:14:24 np0005464214.novalocal sshd-session[1246]: Invalid user dci from 217.154.42.86 port 57740
Oct 01 12:14:24 np0005464214.novalocal sshd-session[1246]: Received disconnect from 217.154.42.86 port 57740:11: Bye Bye [preauth]
Oct 01 12:14:24 np0005464214.novalocal sshd-session[1246]: Disconnected from invalid user dci 217.154.42.86 port 57740 [preauth]
Oct 01 12:14:46 np0005464214.novalocal sshd-session[1249]: Invalid user ftpuser from 175.126.166.172 port 39404
Oct 01 12:14:46 np0005464214.novalocal sshd-session[1249]: Received disconnect from 175.126.166.172 port 39404:11: Bye Bye [preauth]
Oct 01 12:14:46 np0005464214.novalocal sshd-session[1249]: Disconnected from invalid user ftpuser 175.126.166.172 port 39404 [preauth]
Oct 01 12:14:58 np0005464214.novalocal sshd-session[1251]: Received disconnect from 49.49.32.245 port 33956:11: Bye Bye [preauth]
Oct 01 12:14:58 np0005464214.novalocal sshd-session[1251]: Disconnected from authenticating user root 49.49.32.245 port 33956 [preauth]
Oct 01 12:15:18 np0005464214.novalocal sshd-session[1253]: Invalid user test from 217.154.42.86 port 50270
Oct 01 12:15:18 np0005464214.novalocal sshd-session[1253]: Received disconnect from 217.154.42.86 port 50270:11: Bye Bye [preauth]
Oct 01 12:15:18 np0005464214.novalocal sshd-session[1253]: Disconnected from invalid user test 217.154.42.86 port 50270 [preauth]
Oct 01 12:15:27 np0005464214.novalocal sshd-session[1255]: Invalid user pavan from 45.249.247.86 port 49104
Oct 01 12:15:27 np0005464214.novalocal sshd-session[1255]: Received disconnect from 45.249.247.86 port 49104:11: Bye Bye [preauth]
Oct 01 12:15:27 np0005464214.novalocal sshd-session[1255]: Disconnected from invalid user pavan 45.249.247.86 port 49104 [preauth]
Oct 01 12:15:35 np0005464214.novalocal sshd-session[1257]: Invalid user postgres from 121.142.87.218 port 55054
Oct 01 12:15:36 np0005464214.novalocal sshd-session[1257]: Received disconnect from 121.142.87.218 port 55054:11: Bye Bye [preauth]
Oct 01 12:15:36 np0005464214.novalocal sshd-session[1257]: Disconnected from invalid user postgres 121.142.87.218 port 55054 [preauth]
Oct 01 12:16:04 np0005464214.novalocal sshd-session[1259]: Received disconnect from 175.126.166.172 port 40852:11: Bye Bye [preauth]
Oct 01 12:16:04 np0005464214.novalocal sshd-session[1259]: Disconnected from authenticating user root 175.126.166.172 port 40852 [preauth]
Oct 01 12:16:15 np0005464214.novalocal sshd-session[1261]: Invalid user mapr from 49.49.32.245 port 57348
Oct 01 12:16:15 np0005464214.novalocal sshd-session[1261]: Received disconnect from 49.49.32.245 port 57348:11: Bye Bye [preauth]
Oct 01 12:16:15 np0005464214.novalocal sshd-session[1261]: Disconnected from invalid user mapr 49.49.32.245 port 57348 [preauth]
Oct 01 12:16:16 np0005464214.novalocal sshd-session[1263]: Received disconnect from 217.154.42.86 port 39140:11: Bye Bye [preauth]
Oct 01 12:16:16 np0005464214.novalocal sshd-session[1263]: Disconnected from authenticating user root 217.154.42.86 port 39140 [preauth]
Oct 01 12:16:51 np0005464214.novalocal sshd-session[1265]: Invalid user dima from 121.142.87.218 port 49888
Oct 01 12:16:52 np0005464214.novalocal sshd-session[1265]: Received disconnect from 121.142.87.218 port 49888:11: Bye Bye [preauth]
Oct 01 12:16:52 np0005464214.novalocal sshd-session[1265]: Disconnected from invalid user dima 121.142.87.218 port 49888 [preauth]
Oct 01 12:17:00 np0005464214.novalocal sshd-session[1267]: Invalid user ramesh from 45.249.247.86 port 45658
Oct 01 12:17:00 np0005464214.novalocal sshd-session[1267]: Received disconnect from 45.249.247.86 port 45658:11: Bye Bye [preauth]
Oct 01 12:17:00 np0005464214.novalocal sshd-session[1267]: Disconnected from invalid user ramesh 45.249.247.86 port 45658 [preauth]
Oct 01 12:17:20 np0005464214.novalocal sshd-session[1269]: Invalid user rk from 217.154.42.86 port 55800
Oct 01 12:17:20 np0005464214.novalocal sshd-session[1269]: Received disconnect from 217.154.42.86 port 55800:11: Bye Bye [preauth]
Oct 01 12:17:20 np0005464214.novalocal sshd-session[1269]: Disconnected from invalid user rk 217.154.42.86 port 55800 [preauth]
Oct 01 12:17:24 np0005464214.novalocal sshd-session[1271]: Invalid user seekcy from 175.126.166.172 port 55660
Oct 01 12:17:24 np0005464214.novalocal sshd-session[1271]: Received disconnect from 175.126.166.172 port 55660:11: Bye Bye [preauth]
Oct 01 12:17:24 np0005464214.novalocal sshd-session[1271]: Disconnected from invalid user seekcy 175.126.166.172 port 55660 [preauth]
Oct 01 12:17:36 np0005464214.novalocal sshd-session[1274]: Invalid user pavan from 49.49.32.245 port 52518
Oct 01 12:17:36 np0005464214.novalocal sshd-session[1274]: Received disconnect from 49.49.32.245 port 52518:11: Bye Bye [preauth]
Oct 01 12:17:36 np0005464214.novalocal sshd-session[1274]: Disconnected from invalid user pavan 49.49.32.245 port 52518 [preauth]
Oct 01 12:18:13 np0005464214.novalocal sshd-session[1276]: Received disconnect from 121.142.87.218 port 44724:11: Bye Bye [preauth]
Oct 01 12:18:13 np0005464214.novalocal sshd-session[1276]: Disconnected from authenticating user root 121.142.87.218 port 44724 [preauth]
Oct 01 12:18:17 np0005464214.novalocal sshd-session[1278]: Invalid user rootftp from 217.154.42.86 port 34584
Oct 01 12:18:17 np0005464214.novalocal sshd-session[1278]: Received disconnect from 217.154.42.86 port 34584:11: Bye Bye [preauth]
Oct 01 12:18:17 np0005464214.novalocal sshd-session[1278]: Disconnected from invalid user rootftp 217.154.42.86 port 34584 [preauth]
Oct 01 12:18:37 np0005464214.novalocal sshd-session[1281]: Invalid user teste from 45.249.247.86 port 35160
Oct 01 12:18:38 np0005464214.novalocal sshd-session[1281]: Received disconnect from 45.249.247.86 port 35160:11: Bye Bye [preauth]
Oct 01 12:18:38 np0005464214.novalocal sshd-session[1281]: Disconnected from invalid user teste 45.249.247.86 port 35160 [preauth]
Oct 01 12:18:39 np0005464214.novalocal sshd-session[1283]: Invalid user rr from 175.126.166.172 port 33770
Oct 01 12:18:40 np0005464214.novalocal sshd-session[1283]: Received disconnect from 175.126.166.172 port 33770:11: Bye Bye [preauth]
Oct 01 12:18:40 np0005464214.novalocal sshd-session[1283]: Disconnected from invalid user rr 175.126.166.172 port 33770 [preauth]
Oct 01 12:18:56 np0005464214.novalocal sshd-session[1285]: Invalid user seekcy from 49.49.32.245 port 47680
Oct 01 12:18:56 np0005464214.novalocal sshd-session[1285]: Received disconnect from 49.49.32.245 port 47680:11: Bye Bye [preauth]
Oct 01 12:18:56 np0005464214.novalocal sshd-session[1285]: Disconnected from invalid user seekcy 49.49.32.245 port 47680 [preauth]
Oct 01 12:19:12 np0005464214.novalocal sshd-session[1287]: Invalid user admin from 217.154.42.86 port 53746
Oct 01 12:19:12 np0005464214.novalocal sshd-session[1287]: Received disconnect from 217.154.42.86 port 53746:11: Bye Bye [preauth]
Oct 01 12:19:12 np0005464214.novalocal sshd-session[1287]: Disconnected from invalid user admin 217.154.42.86 port 53746 [preauth]
Oct 01 12:19:28 np0005464214.novalocal sshd-session[1290]: Invalid user bot1 from 121.142.87.218 port 39576
Oct 01 12:19:29 np0005464214.novalocal sshd-session[1290]: Received disconnect from 121.142.87.218 port 39576:11: Bye Bye [preauth]
Oct 01 12:19:29 np0005464214.novalocal sshd-session[1290]: Disconnected from invalid user bot1 121.142.87.218 port 39576 [preauth]
Oct 01 12:19:54 np0005464214.novalocal sshd-session[1292]: Invalid user s1 from 175.126.166.172 port 44942
Oct 01 12:19:55 np0005464214.novalocal sshd-session[1292]: Received disconnect from 175.126.166.172 port 44942:11: Bye Bye [preauth]
Oct 01 12:19:55 np0005464214.novalocal sshd-session[1292]: Disconnected from invalid user s1 175.126.166.172 port 44942 [preauth]
Oct 01 12:20:06 np0005464214.novalocal sshd-session[1294]: Invalid user teste from 217.154.42.86 port 41324
Oct 01 12:20:06 np0005464214.novalocal sshd-session[1294]: Received disconnect from 217.154.42.86 port 41324:11: Bye Bye [preauth]
Oct 01 12:20:06 np0005464214.novalocal sshd-session[1294]: Disconnected from invalid user teste 217.154.42.86 port 41324 [preauth]
Oct 01 12:20:14 np0005464214.novalocal sshd-session[1296]: Invalid user seekcy from 49.49.32.245 port 42842
Oct 01 12:20:14 np0005464214.novalocal sshd-session[1296]: Received disconnect from 49.49.32.245 port 42842:11: Bye Bye [preauth]
Oct 01 12:20:14 np0005464214.novalocal sshd-session[1296]: Disconnected from invalid user seekcy 49.49.32.245 port 42842 [preauth]
Oct 01 12:20:30 np0005464214.novalocal sshd-session[1298]: Invalid user admin from 185.156.73.233 port 48186
Oct 01 12:20:30 np0005464214.novalocal sshd-session[1298]: Connection closed by invalid user admin 185.156.73.233 port 48186 [preauth]
Oct 01 12:20:42 np0005464214.novalocal sshd-session[1300]: Received disconnect from 121.142.87.218 port 34406:11: Bye Bye [preauth]
Oct 01 12:20:42 np0005464214.novalocal sshd-session[1300]: Disconnected from authenticating user root 121.142.87.218 port 34406 [preauth]
Oct 01 12:20:58 np0005464214.novalocal sshd-session[1302]: Invalid user steam1 from 217.154.42.86 port 47458
Oct 01 12:20:58 np0005464214.novalocal sshd-session[1302]: Received disconnect from 217.154.42.86 port 47458:11: Bye Bye [preauth]
Oct 01 12:20:58 np0005464214.novalocal sshd-session[1302]: Disconnected from invalid user steam1 217.154.42.86 port 47458 [preauth]
Oct 01 12:21:10 np0005464214.novalocal sshd-session[1304]: Invalid user nb from 175.126.166.172 port 53006
Oct 01 12:21:10 np0005464214.novalocal sshd-session[1304]: Received disconnect from 175.126.166.172 port 53006:11: Bye Bye [preauth]
Oct 01 12:21:10 np0005464214.novalocal sshd-session[1304]: Disconnected from invalid user nb 175.126.166.172 port 53006 [preauth]
Oct 01 12:21:31 np0005464214.novalocal sshd-session[1306]: Invalid user s1 from 49.49.32.245 port 38010
Oct 01 12:21:31 np0005464214.novalocal sshd-session[1306]: Received disconnect from 49.49.32.245 port 38010:11: Bye Bye [preauth]
Oct 01 12:21:31 np0005464214.novalocal sshd-session[1306]: Disconnected from invalid user s1 49.49.32.245 port 38010 [preauth]
Oct 01 12:21:51 np0005464214.novalocal sshd-session[1309]: Invalid user jason1 from 217.154.42.86 port 35354
Oct 01 12:21:51 np0005464214.novalocal sshd-session[1309]: Received disconnect from 217.154.42.86 port 35354:11: Bye Bye [preauth]
Oct 01 12:21:51 np0005464214.novalocal sshd-session[1309]: Disconnected from invalid user jason1 217.154.42.86 port 35354 [preauth]
Oct 01 12:21:56 np0005464214.novalocal sshd-session[1311]: Invalid user confluence from 121.142.87.218 port 57488
Oct 01 12:21:57 np0005464214.novalocal sshd-session[1311]: Received disconnect from 121.142.87.218 port 57488:11: Bye Bye [preauth]
Oct 01 12:21:57 np0005464214.novalocal sshd-session[1311]: Disconnected from invalid user confluence 121.142.87.218 port 57488 [preauth]
Oct 01 12:22:25 np0005464214.novalocal sshd-session[1313]: Invalid user seekcy from 175.126.166.172 port 54922
Oct 01 12:22:25 np0005464214.novalocal sshd-session[1313]: Received disconnect from 175.126.166.172 port 54922:11: Bye Bye [preauth]
Oct 01 12:22:25 np0005464214.novalocal sshd-session[1313]: Disconnected from invalid user seekcy 175.126.166.172 port 54922 [preauth]
Oct 01 12:22:46 np0005464214.novalocal sshd-session[1315]: Invalid user seekcy from 217.154.42.86 port 39014
Oct 01 12:22:46 np0005464214.novalocal sshd-session[1315]: Received disconnect from 217.154.42.86 port 39014:11: Bye Bye [preauth]
Oct 01 12:22:46 np0005464214.novalocal sshd-session[1315]: Disconnected from invalid user seekcy 217.154.42.86 port 39014 [preauth]
Oct 01 12:22:47 np0005464214.novalocal sshd-session[1317]: Invalid user giorgio from 49.49.32.245 port 33178
Oct 01 12:22:48 np0005464214.novalocal sshd-session[1317]: Received disconnect from 49.49.32.245 port 33178:11: Bye Bye [preauth]
Oct 01 12:22:48 np0005464214.novalocal sshd-session[1317]: Disconnected from invalid user giorgio 49.49.32.245 port 33178 [preauth]
Oct 01 12:23:12 np0005464214.novalocal sshd-session[1320]: Invalid user axway from 121.142.87.218 port 52320
Oct 01 12:23:13 np0005464214.novalocal sshd-session[1320]: Received disconnect from 121.142.87.218 port 52320:11: Bye Bye [preauth]
Oct 01 12:23:13 np0005464214.novalocal sshd-session[1320]: Disconnected from invalid user axway 121.142.87.218 port 52320 [preauth]
Oct 01 12:23:42 np0005464214.novalocal sshd-session[1322]: Invalid user steam1 from 175.126.166.172 port 59292
Oct 01 12:23:42 np0005464214.novalocal sshd-session[1322]: Received disconnect from 175.126.166.172 port 59292:11: Bye Bye [preauth]
Oct 01 12:23:42 np0005464214.novalocal sshd-session[1322]: Disconnected from invalid user steam1 175.126.166.172 port 59292 [preauth]
Oct 01 12:23:42 np0005464214.novalocal sshd-session[1324]: Invalid user rr from 217.154.42.86 port 48394
Oct 01 12:23:43 np0005464214.novalocal sshd-session[1324]: Received disconnect from 217.154.42.86 port 48394:11: Bye Bye [preauth]
Oct 01 12:23:43 np0005464214.novalocal sshd-session[1324]: Disconnected from invalid user rr 217.154.42.86 port 48394 [preauth]
Oct 01 12:24:07 np0005464214.novalocal sshd-session[1327]: Invalid user fff from 49.49.32.245 port 56576
Oct 01 12:24:07 np0005464214.novalocal sshd-session[1327]: Received disconnect from 49.49.32.245 port 56576:11: Bye Bye [preauth]
Oct 01 12:24:07 np0005464214.novalocal sshd-session[1327]: Disconnected from invalid user fff 49.49.32.245 port 56576 [preauth]
Oct 01 12:24:29 np0005464214.novalocal sshd-session[1329]: Invalid user seekcy from 121.142.87.218 port 47158
Oct 01 12:24:30 np0005464214.novalocal sshd-session[1329]: Received disconnect from 121.142.87.218 port 47158:11: Bye Bye [preauth]
Oct 01 12:24:30 np0005464214.novalocal sshd-session[1329]: Disconnected from invalid user seekcy 121.142.87.218 port 47158 [preauth]
Oct 01 12:24:39 np0005464214.novalocal sshd-session[1331]: Invalid user seekcy from 217.154.42.86 port 58036
Oct 01 12:24:40 np0005464214.novalocal sshd-session[1331]: Received disconnect from 217.154.42.86 port 58036:11: Bye Bye [preauth]
Oct 01 12:24:40 np0005464214.novalocal sshd-session[1331]: Disconnected from invalid user seekcy 217.154.42.86 port 58036 [preauth]
Oct 01 12:24:58 np0005464214.novalocal sshd-session[1333]: Invalid user jason1 from 175.126.166.172 port 60314
Oct 01 12:24:58 np0005464214.novalocal sshd-session[1333]: Received disconnect from 175.126.166.172 port 60314:11: Bye Bye [preauth]
Oct 01 12:24:58 np0005464214.novalocal sshd-session[1333]: Disconnected from invalid user jason1 175.126.166.172 port 60314 [preauth]
Oct 01 12:25:01 np0005464214.novalocal sshd-session[1335]: Invalid user ubuntu from 45.249.247.86 port 39534
Oct 01 12:25:02 np0005464214.novalocal sshd-session[1335]: Received disconnect from 45.249.247.86 port 39534:11: Bye Bye [preauth]
Oct 01 12:25:02 np0005464214.novalocal sshd-session[1335]: Disconnected from invalid user ubuntu 45.249.247.86 port 39534 [preauth]
Oct 01 12:25:25 np0005464214.novalocal sshd-session[1338]: Invalid user rr from 49.49.32.245 port 51746
Oct 01 12:25:25 np0005464214.novalocal sshd-session[1338]: Received disconnect from 49.49.32.245 port 51746:11: Bye Bye [preauth]
Oct 01 12:25:25 np0005464214.novalocal sshd-session[1338]: Disconnected from invalid user rr 49.49.32.245 port 51746 [preauth]
Oct 01 12:25:32 np0005464214.novalocal sshd-session[1340]: Received disconnect from 217.154.42.86 port 49316:11: Bye Bye [preauth]
Oct 01 12:25:32 np0005464214.novalocal sshd-session[1340]: Disconnected from authenticating user root 217.154.42.86 port 49316 [preauth]
Oct 01 12:25:46 np0005464214.novalocal sshd-session[1342]: Invalid user seekcy from 121.142.87.218 port 41990
Oct 01 12:25:46 np0005464214.novalocal sshd-session[1342]: Received disconnect from 121.142.87.218 port 41990:11: Bye Bye [preauth]
Oct 01 12:25:46 np0005464214.novalocal sshd-session[1342]: Disconnected from invalid user seekcy 121.142.87.218 port 41990 [preauth]
Oct 01 12:26:12 np0005464214.novalocal sshd-session[1344]: Received disconnect from 175.126.166.172 port 53726:11: Bye Bye [preauth]
Oct 01 12:26:12 np0005464214.novalocal sshd-session[1344]: Disconnected from authenticating user root 175.126.166.172 port 53726 [preauth]
Oct 01 12:26:27 np0005464214.novalocal sshd-session[1346]: Invalid user giorgio from 217.154.42.86 port 42484
Oct 01 12:26:27 np0005464214.novalocal sshd-session[1346]: Received disconnect from 217.154.42.86 port 42484:11: Bye Bye [preauth]
Oct 01 12:26:27 np0005464214.novalocal sshd-session[1346]: Disconnected from invalid user giorgio 217.154.42.86 port 42484 [preauth]
Oct 01 12:26:39 np0005464214.novalocal sshd-session[1349]: Invalid user seekcy from 49.49.32.245 port 46902
Oct 01 12:26:39 np0005464214.novalocal sshd-session[1349]: Received disconnect from 49.49.32.245 port 46902:11: Bye Bye [preauth]
Oct 01 12:26:39 np0005464214.novalocal sshd-session[1349]: Disconnected from invalid user seekcy 49.49.32.245 port 46902 [preauth]
Oct 01 12:26:57 np0005464214.novalocal sshd-session[1351]: Invalid user seekcy from 121.142.87.218 port 36824
Oct 01 12:26:57 np0005464214.novalocal sshd-session[1351]: Received disconnect from 121.142.87.218 port 36824:11: Bye Bye [preauth]
Oct 01 12:26:57 np0005464214.novalocal sshd-session[1351]: Disconnected from invalid user seekcy 121.142.87.218 port 36824 [preauth]
Oct 01 12:27:20 np0005464214.novalocal sshd-session[1353]: Invalid user mapr from 217.154.42.86 port 51972
Oct 01 12:27:20 np0005464214.novalocal sshd-session[1353]: Received disconnect from 217.154.42.86 port 51972:11: Bye Bye [preauth]
Oct 01 12:27:20 np0005464214.novalocal sshd-session[1353]: Disconnected from invalid user mapr 217.154.42.86 port 51972 [preauth]
Oct 01 12:27:25 np0005464214.novalocal sshd-session[1355]: Invalid user eva from 175.126.166.172 port 35942
Oct 01 12:27:25 np0005464214.novalocal sshd-session[1355]: Received disconnect from 175.126.166.172 port 35942:11: Bye Bye [preauth]
Oct 01 12:27:25 np0005464214.novalocal sshd-session[1355]: Disconnected from invalid user eva 175.126.166.172 port 35942 [preauth]
Oct 01 12:27:57 np0005464214.novalocal sshd-session[1357]: Invalid user jason1 from 49.49.32.245 port 42058
Oct 01 12:27:57 np0005464214.novalocal sshd-session[1357]: Received disconnect from 49.49.32.245 port 42058:11: Bye Bye [preauth]
Oct 01 12:27:57 np0005464214.novalocal sshd-session[1357]: Disconnected from invalid user jason1 49.49.32.245 port 42058 [preauth]
Oct 01 12:28:05 np0005464214.novalocal sshd-session[1359]: Invalid user test from 45.249.247.86 port 47234
Oct 01 12:28:05 np0005464214.novalocal sshd-session[1359]: Received disconnect from 45.249.247.86 port 47234:11: Bye Bye [preauth]
Oct 01 12:28:05 np0005464214.novalocal sshd-session[1359]: Disconnected from invalid user test 45.249.247.86 port 47234 [preauth]
Oct 01 12:28:11 np0005464214.novalocal sshd-session[1361]: Received disconnect from 121.142.87.218 port 59890:11: Bye Bye [preauth]
Oct 01 12:28:11 np0005464214.novalocal sshd-session[1361]: Disconnected from authenticating user root 121.142.87.218 port 59890 [preauth]
Oct 01 12:28:14 np0005464214.novalocal sshd-session[1363]: Invalid user nb from 217.154.42.86 port 47910
Oct 01 12:28:14 np0005464214.novalocal sshd-session[1363]: Received disconnect from 217.154.42.86 port 47910:11: Bye Bye [preauth]
Oct 01 12:28:14 np0005464214.novalocal sshd-session[1363]: Disconnected from invalid user nb 217.154.42.86 port 47910 [preauth]
Oct 01 12:28:43 np0005464214.novalocal sshd-session[1366]: Invalid user oleg from 175.126.166.172 port 57946
Oct 01 12:28:43 np0005464214.novalocal sshd-session[1366]: Received disconnect from 175.126.166.172 port 57946:11: Bye Bye [preauth]
Oct 01 12:28:43 np0005464214.novalocal sshd-session[1366]: Disconnected from invalid user oleg 175.126.166.172 port 57946 [preauth]
Oct 01 12:29:11 np0005464214.novalocal sshd-session[1368]: Invalid user khoa from 217.154.42.86 port 53504
Oct 01 12:29:11 np0005464214.novalocal sshd-session[1368]: Received disconnect from 217.154.42.86 port 53504:11: Bye Bye [preauth]
Oct 01 12:29:11 np0005464214.novalocal sshd-session[1368]: Disconnected from invalid user khoa 217.154.42.86 port 53504 [preauth]
Oct 01 12:29:15 np0005464214.novalocal sshd-session[1370]: Invalid user rootftp from 49.49.32.245 port 37224
Oct 01 12:29:16 np0005464214.novalocal sshd-session[1370]: Received disconnect from 49.49.32.245 port 37224:11: Bye Bye [preauth]
Oct 01 12:29:16 np0005464214.novalocal sshd-session[1370]: Disconnected from invalid user rootftp 49.49.32.245 port 37224 [preauth]
Oct 01 12:29:30 np0005464214.novalocal sshd-session[1373]: Invalid user git from 121.142.87.218 port 54738
Oct 01 12:29:30 np0005464214.novalocal sshd-session[1373]: Received disconnect from 121.142.87.218 port 54738:11: Bye Bye [preauth]
Oct 01 12:29:30 np0005464214.novalocal sshd-session[1373]: Disconnected from invalid user git 121.142.87.218 port 54738 [preauth]
Oct 01 12:29:40 np0005464214.novalocal sshd-session[1375]: Invalid user mapr from 45.249.247.86 port 33638
Oct 01 12:29:41 np0005464214.novalocal sshd-session[1375]: Received disconnect from 45.249.247.86 port 33638:11: Bye Bye [preauth]
Oct 01 12:29:41 np0005464214.novalocal sshd-session[1375]: Disconnected from invalid user mapr 45.249.247.86 port 33638 [preauth]
Oct 01 12:30:02 np0005464214.novalocal sshd-session[1377]: Received disconnect from 175.126.166.172 port 45034:11: Bye Bye [preauth]
Oct 01 12:30:02 np0005464214.novalocal sshd-session[1377]: Disconnected from authenticating user root 175.126.166.172 port 45034 [preauth]
Oct 01 12:30:10 np0005464214.novalocal sshd-session[1379]: Invalid user fff from 217.154.42.86 port 49690
Oct 01 12:30:10 np0005464214.novalocal sshd-session[1379]: Received disconnect from 217.154.42.86 port 49690:11: Bye Bye [preauth]
Oct 01 12:30:10 np0005464214.novalocal sshd-session[1379]: Disconnected from invalid user fff 217.154.42.86 port 49690 [preauth]
Oct 01 12:30:42 np0005464214.novalocal sshd-session[1381]: Invalid user teste from 49.49.32.245 port 65358
Oct 01 12:30:42 np0005464214.novalocal sshd-session[1381]: Received disconnect from 49.49.32.245 port 65358:11: Bye Bye [preauth]
Oct 01 12:30:42 np0005464214.novalocal sshd-session[1381]: Disconnected from invalid user teste 49.49.32.245 port 65358 [preauth]
Oct 01 12:30:50 np0005464214.novalocal sshd-session[1383]: Invalid user use from 121.142.87.218 port 49576
Oct 01 12:30:50 np0005464214.novalocal sshd-session[1383]: Received disconnect from 121.142.87.218 port 49576:11: Bye Bye [preauth]
Oct 01 12:30:50 np0005464214.novalocal sshd-session[1383]: Disconnected from invalid user use 121.142.87.218 port 49576 [preauth]
Oct 01 12:31:09 np0005464214.novalocal sshd-session[1385]: Received disconnect from 217.154.42.86 port 33344:11: Bye Bye [preauth]
Oct 01 12:31:09 np0005464214.novalocal sshd-session[1385]: Disconnected from authenticating user root 217.154.42.86 port 33344 [preauth]
Oct 01 12:31:15 np0005464214.novalocal sshd-session[1387]: Invalid user grafana from 45.249.247.86 port 34608
Oct 01 12:31:15 np0005464214.novalocal sshd-session[1387]: Received disconnect from 45.249.247.86 port 34608:11: Bye Bye [preauth]
Oct 01 12:31:15 np0005464214.novalocal sshd-session[1387]: Disconnected from invalid user grafana 45.249.247.86 port 34608 [preauth]
Oct 01 12:31:18 np0005464214.novalocal sshd-session[1389]: Invalid user dci from 175.126.166.172 port 57560
Oct 01 12:31:18 np0005464214.novalocal sshd-session[1389]: Received disconnect from 175.126.166.172 port 57560:11: Bye Bye [preauth]
Oct 01 12:31:18 np0005464214.novalocal sshd-session[1389]: Disconnected from invalid user dci 175.126.166.172 port 57560 [preauth]
Oct 01 12:32:03 np0005464214.novalocal sshd-session[1391]: Invalid user ubuntu from 49.49.32.245 port 55796
Oct 01 12:32:03 np0005464214.novalocal sshd-session[1391]: Received disconnect from 49.49.32.245 port 55796:11: Bye Bye [preauth]
Oct 01 12:32:03 np0005464214.novalocal sshd-session[1391]: Disconnected from invalid user ubuntu 49.49.32.245 port 55796 [preauth]
Oct 01 12:32:04 np0005464214.novalocal sshd-session[1395]: Invalid user s1 from 217.154.42.86 port 37692
Oct 01 12:32:05 np0005464214.novalocal sshd-session[1395]: Received disconnect from 217.154.42.86 port 37692:11: Bye Bye [preauth]
Oct 01 12:32:05 np0005464214.novalocal sshd-session[1395]: Disconnected from invalid user s1 217.154.42.86 port 37692 [preauth]
Oct 01 12:32:05 np0005464214.novalocal sshd-session[1393]: Invalid user farmacia from 121.142.87.218 port 44406
Oct 01 12:32:05 np0005464214.novalocal sshd-session[1393]: Received disconnect from 121.142.87.218 port 44406:11: Bye Bye [preauth]
Oct 01 12:32:05 np0005464214.novalocal sshd-session[1393]: Disconnected from invalid user farmacia 121.142.87.218 port 44406 [preauth]
Oct 01 12:32:35 np0005464214.novalocal sshd-session[1397]: Invalid user rk from 175.126.166.172 port 58702
Oct 01 12:32:35 np0005464214.novalocal sshd-session[1397]: Received disconnect from 175.126.166.172 port 58702:11: Bye Bye [preauth]
Oct 01 12:32:35 np0005464214.novalocal sshd-session[1397]: Disconnected from invalid user rk 175.126.166.172 port 58702 [preauth]
Oct 01 12:32:52 np0005464214.novalocal sshd-session[1399]: Received disconnect from 45.249.247.86 port 56370:11: Bye Bye [preauth]
Oct 01 12:32:52 np0005464214.novalocal sshd-session[1399]: Disconnected from authenticating user root 45.249.247.86 port 56370 [preauth]
Oct 01 12:32:59 np0005464214.novalocal sshd-session[1401]: Invalid user seekcy from 217.154.42.86 port 35244
Oct 01 12:32:59 np0005464214.novalocal sshd-session[1401]: Received disconnect from 217.154.42.86 port 35244:11: Bye Bye [preauth]
Oct 01 12:32:59 np0005464214.novalocal sshd-session[1401]: Disconnected from invalid user seekcy 217.154.42.86 port 35244 [preauth]
Oct 01 12:33:04 np0005464214.novalocal sshd-session[1403]: Invalid user admin from 185.156.73.233 port 41482
Oct 01 12:33:04 np0005464214.novalocal sshd-session[1403]: Connection closed by invalid user admin 185.156.73.233 port 41482 [preauth]
Oct 01 12:33:18 np0005464214.novalocal sshd-session[1405]: Invalid user uno50 from 121.142.87.218 port 39240
Oct 01 12:33:18 np0005464214.novalocal sshd-session[1405]: Received disconnect from 121.142.87.218 port 39240:11: Bye Bye [preauth]
Oct 01 12:33:18 np0005464214.novalocal sshd-session[1405]: Disconnected from invalid user uno50 121.142.87.218 port 39240 [preauth]
Oct 01 12:33:26 np0005464214.novalocal sshd-session[1407]: Invalid user grafana from 49.49.32.245 port 50946
Oct 01 12:33:26 np0005464214.novalocal sshd-session[1407]: Received disconnect from 49.49.32.245 port 50946:11: Bye Bye [preauth]
Oct 01 12:33:26 np0005464214.novalocal sshd-session[1407]: Disconnected from invalid user grafana 49.49.32.245 port 50946 [preauth]
Oct 01 12:33:36 np0005464214.novalocal sshd-session[1410]: banner exchange: Connection from 143.198.64.205 port 53082: invalid format
Oct 01 12:33:51 np0005464214.novalocal sshd-session[1412]: Invalid user rootftp from 175.126.166.172 port 33218
Oct 01 12:33:51 np0005464214.novalocal sshd-session[1412]: Received disconnect from 175.126.166.172 port 33218:11: Bye Bye [preauth]
Oct 01 12:33:51 np0005464214.novalocal sshd-session[1412]: Disconnected from invalid user rootftp 175.126.166.172 port 33218 [preauth]
Oct 01 12:33:53 np0005464214.novalocal sshd-session[1415]: Received disconnect from 217.154.42.86 port 35376:11: Bye Bye [preauth]
Oct 01 12:33:53 np0005464214.novalocal sshd-session[1415]: Disconnected from authenticating user root 217.154.42.86 port 35376 [preauth]
Oct 01 12:34:02 np0005464214.novalocal anacron[1137]: Job `cron.weekly' started
Oct 01 12:34:02 np0005464214.novalocal anacron[1137]: Job `cron.weekly' terminated
Oct 01 12:34:11 np0005464214.novalocal sshd-session[1419]: Accepted publickey for zuul from 38.102.83.114 port 55192 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 01 12:34:11 np0005464214.novalocal systemd-logind[818]: New session 1 of user zuul.
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Queued start job for default target Main User Target.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Created slice User Application Slice.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Reached target Paths.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Reached target Timers.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Starting D-Bus User Message Bus Socket...
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Starting Create User's Volatile Files and Directories...
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Listening on D-Bus User Message Bus Socket.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Finished Create User's Volatile Files and Directories.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Reached target Sockets.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Reached target Basic System.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Reached target Main User Target.
Oct 01 12:34:11 np0005464214.novalocal systemd[1423]: Startup finished in 153ms.
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 01 12:34:11 np0005464214.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 01 12:34:11 np0005464214.novalocal sshd-session[1419]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:34:11 np0005464214.novalocal python3[1507]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:34:14 np0005464214.novalocal python3[1535]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:34:20 np0005464214.novalocal python3[1593]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:34:21 np0005464214.novalocal python3[1635]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 01 12:34:21 np0005464214.novalocal sshd-session[1594]: Invalid user jason1 from 45.249.247.86 port 59722
Oct 01 12:34:21 np0005464214.novalocal sshd-session[1594]: Received disconnect from 45.249.247.86 port 59722:11: Bye Bye [preauth]
Oct 01 12:34:21 np0005464214.novalocal sshd-session[1594]: Disconnected from invalid user jason1 45.249.247.86 port 59722 [preauth]
Oct 01 12:34:23 np0005464214.novalocal python3[1661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqYqivu2ogZ7lipmmiT9Ls2qUo8D5UWoisJfsIQ69aHSPwYxmf1rseaq0xAckbGKZk62qOZ4U8xHyFKyWuNOMzb//sbuv7hGSYQOzuXcOCy1OQ0NleH2CcjO9Z3DxZ4gOPVl2X951qNqZWS12QFAX6pf1kf9ZdDsap1Ec1wQTxL1cXcyLYTo7WrVDZA5hDsgezm0Mq9/H7HOG2q4IQ7/o7X5OyfGXJYhKOCc5zrID4IF0+y8WzkvbmCJ7JqtZP/nwS33jXuNdpg1Hsm3sRLc/ucxJ0eZzs5eJ00f5Jnbj9CqoDdCp6+9xN2j9nvjZkYjUextY6FF3N9r2V5xl2kXugl9dz4DA4vBoUi8BeWnh6thKtbOwB3KAUYpZnH6c/nFRjf1qmbrEwS7V2LiF51l9pfR4Z1HtnMG4xwQHvBNwSyL2YLCznEG5sfEmoDs0mMfcSuiSXOAiA8P2WeuiMmCT7jUkKO1UpmtqEJP9i4w1vEWqP1w+EGCdQtU7bS/bF0Rk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:23 np0005464214.novalocal python3[1685]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:24 np0005464214.novalocal python3[1784]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:24 np0005464214.novalocal python3[1855]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322064.061439-207-28835398841381/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c96eb7ceeb9c4787898270928c891f09_id_rsa follow=False checksum=89d74924afce1297a5600cbdc4812d29d3f07317 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:25 np0005464214.novalocal python3[1978]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:25 np0005464214.novalocal python3[2049]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322065.038209-240-228341817513442/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c96eb7ceeb9c4787898270928c891f09_id_rsa.pub follow=False checksum=75212e430220dfeb25fafa8dac3c0198acf09cda backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:27 np0005464214.novalocal python3[2098]: ansible-ping Invoked with data=pong
Oct 01 12:34:27 np0005464214.novalocal python3[2122]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:34:29 np0005464214.novalocal python3[2180]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 01 12:34:31 np0005464214.novalocal python3[2212]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:31 np0005464214.novalocal python3[2236]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:31 np0005464214.novalocal python3[2260]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:31 np0005464214.novalocal python3[2284]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:32 np0005464214.novalocal python3[2308]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:32 np0005464214.novalocal python3[2332]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:34 np0005464214.novalocal sudo[2356]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxokrqjznyyzgnoovgudlurvallffgmd ; /usr/bin/python3'
Oct 01 12:34:34 np0005464214.novalocal sudo[2356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:34 np0005464214.novalocal python3[2358]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:34 np0005464214.novalocal sudo[2356]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:34 np0005464214.novalocal sudo[2436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duqishqkhqimdrshfpovsvaldfazkyih ; /usr/bin/python3'
Oct 01 12:34:34 np0005464214.novalocal sudo[2436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:34 np0005464214.novalocal python3[2438]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:34 np0005464214.novalocal sudo[2436]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:35 np0005464214.novalocal sudo[2509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajejrdqpkivrvjwaeubmcujhhpmucxil ; /usr/bin/python3'
Oct 01 12:34:35 np0005464214.novalocal sudo[2509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:35 np0005464214.novalocal sshd-session[2359]: Invalid user ubuntu from 121.142.87.218 port 34074
Oct 01 12:34:35 np0005464214.novalocal python3[2511]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322074.3750746-21-55360735765095/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:35 np0005464214.novalocal sudo[2509]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:35 np0005464214.novalocal sshd-session[2359]: Received disconnect from 121.142.87.218 port 34074:11: Bye Bye [preauth]
Oct 01 12:34:35 np0005464214.novalocal sshd-session[2359]: Disconnected from invalid user ubuntu 121.142.87.218 port 34074 [preauth]
Oct 01 12:34:35 np0005464214.novalocal python3[2559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:36 np0005464214.novalocal python3[2583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:36 np0005464214.novalocal python3[2607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:36 np0005464214.novalocal python3[2631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:37 np0005464214.novalocal python3[2655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:37 np0005464214.novalocal python3[2679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:37 np0005464214.novalocal python3[2703]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:37 np0005464214.novalocal python3[2727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:38 np0005464214.novalocal python3[2751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:38 np0005464214.novalocal python3[2775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:38 np0005464214.novalocal python3[2801]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:38 np0005464214.novalocal python3[2825]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:39 np0005464214.novalocal python3[2849]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:39 np0005464214.novalocal python3[2873]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:39 np0005464214.novalocal python3[2897]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:40 np0005464214.novalocal python3[2921]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:40 np0005464214.novalocal sshd-session[2776]: Invalid user factorio from 49.49.32.245 port 46104
Oct 01 12:34:40 np0005464214.novalocal python3[2945]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:40 np0005464214.novalocal sshd-session[2776]: Received disconnect from 49.49.32.245 port 46104:11: Bye Bye [preauth]
Oct 01 12:34:40 np0005464214.novalocal sshd-session[2776]: Disconnected from invalid user factorio 49.49.32.245 port 46104 [preauth]
Oct 01 12:34:40 np0005464214.novalocal python3[2969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:40 np0005464214.novalocal python3[2993]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:41 np0005464214.novalocal python3[3017]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:41 np0005464214.novalocal python3[3041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:41 np0005464214.novalocal python3[3065]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:41 np0005464214.novalocal python3[3089]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:42 np0005464214.novalocal python3[3113]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:42 np0005464214.novalocal python3[3137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:42 np0005464214.novalocal python3[3161]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:34:45 np0005464214.novalocal sudo[3185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvwqpvsxcgcyvtgcqmytaoxwasefsjkv ; /usr/bin/python3'
Oct 01 12:34:45 np0005464214.novalocal sudo[3185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:45 np0005464214.novalocal python3[3187]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 12:34:45 np0005464214.novalocal systemd[1]: Starting Time & Date Service...
Oct 01 12:34:45 np0005464214.novalocal systemd[1]: Started Time & Date Service.
Oct 01 12:34:45 np0005464214.novalocal systemd-timedated[3189]: Changed time zone to 'UTC' (UTC).
Oct 01 12:34:46 np0005464214.novalocal sudo[3185]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:46 np0005464214.novalocal sudo[3216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grlfooacthkmckubjspzdblfikdlhckg ; /usr/bin/python3'
Oct 01 12:34:46 np0005464214.novalocal sudo[3216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:46 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 01 12:34:46 np0005464214.novalocal irqbalance[814]: IRQ 26 affinity is now unmanaged
Oct 01 12:34:46 np0005464214.novalocal python3[3218]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:46 np0005464214.novalocal sudo[3216]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:46 np0005464214.novalocal python3[3294]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:47 np0005464214.novalocal python3[3365]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759322086.5854244-153-130122549945456/source _original_basename=tmpm92q9y6g follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:47 np0005464214.novalocal python3[3465]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:47 np0005464214.novalocal python3[3536]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759322087.4081943-183-237794728876795/source _original_basename=tmpq4ydf85d follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:48 np0005464214.novalocal sudo[3636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feuvuxemomlsdqxupxwauweocogiiwrj ; /usr/bin/python3'
Oct 01 12:34:48 np0005464214.novalocal sudo[3636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:48 np0005464214.novalocal python3[3638]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:48 np0005464214.novalocal sudo[3636]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:48 np0005464214.novalocal sudo[3709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqoswvhznzcnuiarpqfykditcwnqxgee ; /usr/bin/python3'
Oct 01 12:34:48 np0005464214.novalocal sudo[3709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:49 np0005464214.novalocal python3[3711]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759322088.425242-231-243088015263010/source _original_basename=tmp08i8081e follow=False checksum=2bc1eb5288b1fcb7738d7061543c90ea94f5f91e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:49 np0005464214.novalocal sudo[3709]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:49 np0005464214.novalocal sshd-session[3712]: Invalid user so from 217.154.42.86 port 52996
Oct 01 12:34:49 np0005464214.novalocal sshd-session[3712]: Received disconnect from 217.154.42.86 port 52996:11: Bye Bye [preauth]
Oct 01 12:34:49 np0005464214.novalocal sshd-session[3712]: Disconnected from invalid user so 217.154.42.86 port 52996 [preauth]
Oct 01 12:34:49 np0005464214.novalocal python3[3761]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:34:49 np0005464214.novalocal python3[3787]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:34:50 np0005464214.novalocal sudo[3865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzyjysqcpkneevashpcntsfsujsifbqs ; /usr/bin/python3'
Oct 01 12:34:50 np0005464214.novalocal sudo[3865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:50 np0005464214.novalocal python3[3867]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:34:50 np0005464214.novalocal sudo[3865]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:50 np0005464214.novalocal sudo[3938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowienejfigaiefzuprixhooknynjetf ; /usr/bin/python3'
Oct 01 12:34:50 np0005464214.novalocal sudo[3938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:50 np0005464214.novalocal python3[3940]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322090.2296793-273-122461588494005/source _original_basename=tmp7x9lrbly follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:34:50 np0005464214.novalocal sudo[3938]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:51 np0005464214.novalocal sudo[3989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjlrneedpjuyjkggfzevlthbzxcfadrv ; /usr/bin/python3'
Oct 01 12:34:51 np0005464214.novalocal sudo[3989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:34:51 np0005464214.novalocal python3[3991]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-9ea9-e8da-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:34:51 np0005464214.novalocal sudo[3989]: pam_unix(sudo:session): session closed for user root
Oct 01 12:34:52 np0005464214.novalocal python3[4019]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-9ea9-e8da-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 01 12:34:53 np0005464214.novalocal python3[4048]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:35:10 np0005464214.novalocal sshd-session[4049]: Invalid user so from 175.126.166.172 port 35774
Oct 01 12:35:10 np0005464214.novalocal sshd-session[4049]: Received disconnect from 175.126.166.172 port 35774:11: Bye Bye [preauth]
Oct 01 12:35:10 np0005464214.novalocal sshd-session[4049]: Disconnected from invalid user so 175.126.166.172 port 35774 [preauth]
Oct 01 12:35:12 np0005464214.novalocal sudo[4074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokvntqsdgfzkgvdatlthtsiqgikyglh ; /usr/bin/python3'
Oct 01 12:35:12 np0005464214.novalocal sudo[4074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:35:12 np0005464214.novalocal python3[4076]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:35:12 np0005464214.novalocal sudo[4074]: pam_unix(sudo:session): session closed for user root
Oct 01 12:35:16 np0005464214.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 01 12:35:43 np0005464214.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 01 12:35:43 np0005464214.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7464] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 12:35:43 np0005464214.novalocal systemd-udevd[4080]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7620] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7649] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7652] device (eth1): carrier: link connected
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7655] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7661] policy: auto-activating connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7664] device (eth1): Activation: starting connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7665] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7668] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7673] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 12:35:43 np0005464214.novalocal NetworkManager[860]: <info>  [1759322143.7677] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 12:35:44 np0005464214.novalocal sshd-session[4083]: Received disconnect from 217.154.42.86 port 48122:11: Bye Bye [preauth]
Oct 01 12:35:44 np0005464214.novalocal sshd-session[4083]: Disconnected from authenticating user root 217.154.42.86 port 48122 [preauth]
Oct 01 12:35:44 np0005464214.novalocal python3[4108]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-0426-e037-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:35:54 np0005464214.novalocal sudo[4186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khrkpicsigylsihguzsluqnbzjrrbuur ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 12:35:54 np0005464214.novalocal sudo[4186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:35:54 np0005464214.novalocal python3[4188]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:35:54 np0005464214.novalocal sudo[4186]: pam_unix(sudo:session): session closed for user root
Oct 01 12:35:55 np0005464214.novalocal sudo[4259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjhefcpveiqrvaagxuczkowqzqtufjv ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 12:35:55 np0005464214.novalocal sudo[4259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:35:55 np0005464214.novalocal python3[4261]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759322154.4906135-102-68120778137278/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=8b82f67ed0e41d8d56e27dffdca8d2cb2902b0bf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:35:55 np0005464214.novalocal sudo[4259]: pam_unix(sudo:session): session closed for user root
Oct 01 12:35:55 np0005464214.novalocal sudo[4313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfapfjeiqtbbjrzwauxqnsmpzwdlcnk ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 12:35:55 np0005464214.novalocal sudo[4313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:35:56 np0005464214.novalocal python3[4315]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Stopping Network Manager...
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0498] caught SIGTERM, shutting down normally.
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0505] dhcp4 (eth0): canceled DHCP transaction
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0506] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0506] dhcp4 (eth0): state changed no lease
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0508] manager: NetworkManager state is now CONNECTING
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0617] dhcp4 (eth1): canceled DHCP transaction
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0618] dhcp4 (eth1): state changed no lease
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[860]: <info>  [1759322156.0657] exiting (success)
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Stopped Network Manager.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: NetworkManager.service: Consumed 26.865s CPU time, 9.9M memory peak.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Starting Network Manager...
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1216] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1219] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1264] manager[0x5602c4630070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Starting Hostname Service...
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Started Hostname Service.
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1922] hostname: hostname: using hostnamed
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1923] hostname: static hostname changed from (none) to "np0005464214.novalocal"
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1927] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1930] manager[0x5602c4630070]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1931] manager[0x5602c4630070]: rfkill: WWAN hardware radio set enabled
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1952] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1952] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1953] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1953] manager: Networking is enabled by state file
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1955] settings: Loaded settings plugin: keyfile (internal)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1958] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1979] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1987] dhcp: init: Using DHCP client 'internal'
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1989] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1993] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.1998] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2004] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2010] device (eth0): carrier: link connected
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2013] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2016] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2017] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2023] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2028] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2032] device (eth1): carrier: link connected
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2038] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2043] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01) (indicated)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2043] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2048] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2053] device (eth1): Activation: starting connection 'Wired connection 1' (5676b0c3-8d77-3352-b8fd-5d58f5ca7d01)
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Started Network Manager.
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2059] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2062] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2064] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2065] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2067] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2069] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2071] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2073] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2076] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2082] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2084] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2090] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2092] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2107] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2108] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2112] device (lo): Activation: successful, device activated.
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2128] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2132] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2202] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2214] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2215] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2219] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2221] device (eth0): Activation: successful, device activated.
Oct 01 12:35:56 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322156.2225] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 12:35:56 np0005464214.novalocal sudo[4313]: pam_unix(sudo:session): session closed for user root
Oct 01 12:35:56 np0005464214.novalocal python3[4402]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-0426-e037-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:35:56 np0005464214.novalocal sshd-session[4288]: Invalid user hamed from 121.142.87.218 port 57148
Oct 01 12:35:56 np0005464214.novalocal sshd-session[4288]: Received disconnect from 121.142.87.218 port 57148:11: Bye Bye [preauth]
Oct 01 12:35:56 np0005464214.novalocal sshd-session[4288]: Disconnected from invalid user hamed 121.142.87.218 port 57148 [preauth]
Oct 01 12:35:57 np0005464214.novalocal sshd-session[4286]: Received disconnect from 49.49.32.245 port 41270:11: Bye Bye [preauth]
Oct 01 12:35:57 np0005464214.novalocal sshd-session[4286]: Disconnected from authenticating user root 49.49.32.245 port 41270 [preauth]
Oct 01 12:35:57 np0005464214.novalocal sshd-session[4377]: Invalid user eva from 45.249.247.86 port 40610
Oct 01 12:35:57 np0005464214.novalocal sshd-session[4377]: Received disconnect from 45.249.247.86 port 40610:11: Bye Bye [preauth]
Oct 01 12:35:57 np0005464214.novalocal sshd-session[4377]: Disconnected from invalid user eva 45.249.247.86 port 40610 [preauth]
Oct 01 12:36:06 np0005464214.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 12:36:26 np0005464214.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 12:36:28 np0005464214.novalocal sshd-session[4409]: Invalid user ramesh from 175.126.166.172 port 59982
Oct 01 12:36:28 np0005464214.novalocal sshd-session[4409]: Received disconnect from 175.126.166.172 port 59982:11: Bye Bye [preauth]
Oct 01 12:36:28 np0005464214.novalocal sshd-session[4409]: Disconnected from invalid user ramesh 175.126.166.172 port 59982 [preauth]
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2690] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 12:36:41 np0005464214.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 12:36:41 np0005464214.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2925] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2927] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2935] device (eth1): Activation: successful, device activated.
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2941] manager: startup complete
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2943] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <warn>  [1759322201.2948] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.2956] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3057] dhcp4 (eth1): canceled DHCP transaction
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3058] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3058] dhcp4 (eth1): state changed no lease
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3072] policy: auto-activating connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3076] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3077] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3079] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3086] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3093] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3127] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3129] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 12:36:41 np0005464214.novalocal NetworkManager[4330]: <info>  [1759322201.3134] device (eth1): Activation: successful, device activated.
Oct 01 12:36:44 np0005464214.novalocal sshd-session[4435]: Invalid user grafana from 217.154.42.86 port 45214
Oct 01 12:36:44 np0005464214.novalocal sshd-session[4435]: Received disconnect from 217.154.42.86 port 45214:11: Bye Bye [preauth]
Oct 01 12:36:44 np0005464214.novalocal sshd-session[4435]: Disconnected from invalid user grafana 217.154.42.86 port 45214 [preauth]
Oct 01 12:36:51 np0005464214.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 12:36:53 np0005464214.novalocal sudo[4512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuynncdxkmbosgfseenpxgnjipmmqjej ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 12:36:53 np0005464214.novalocal sudo[4512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:36:53 np0005464214.novalocal python3[4514]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:36:53 np0005464214.novalocal sudo[4512]: pam_unix(sudo:session): session closed for user root
Oct 01 12:36:53 np0005464214.novalocal sudo[4585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixvzfcbwhkcjgzycjcuyaijlbpnniqpt ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 01 12:36:53 np0005464214.novalocal sudo[4585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:36:54 np0005464214.novalocal python3[4587]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322213.4361048-267-72837742853929/source _original_basename=tmp3gi1m3_u follow=False checksum=657dff622f384eae175b3b6dde958f4cf16720ee backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:36:54 np0005464214.novalocal sudo[4585]: pam_unix(sudo:session): session closed for user root
Oct 01 12:36:54 np0005464214.novalocal systemd[1423]: Starting Mark boot as successful...
Oct 01 12:36:54 np0005464214.novalocal systemd[1423]: Finished Mark boot as successful.
Oct 01 12:36:56 np0005464214.novalocal irqbalance[814]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 01 12:36:56 np0005464214.novalocal irqbalance[814]: IRQ 27 affinity is now unmanaged
Oct 01 12:37:15 np0005464214.novalocal sshd-session[4613]: Received disconnect from 49.49.32.245 port 36440:11: Bye Bye [preauth]
Oct 01 12:37:15 np0005464214.novalocal sshd-session[4613]: Disconnected from authenticating user root 49.49.32.245 port 36440 [preauth]
Oct 01 12:37:15 np0005464214.novalocal sshd-session[4615]: Invalid user ljsoft from 121.142.87.218 port 51988
Oct 01 12:37:15 np0005464214.novalocal sshd-session[4615]: Received disconnect from 121.142.87.218 port 51988:11: Bye Bye [preauth]
Oct 01 12:37:15 np0005464214.novalocal sshd-session[4615]: Disconnected from invalid user ljsoft 121.142.87.218 port 51988 [preauth]
Oct 01 12:37:34 np0005464214.novalocal sshd-session[4617]: Invalid user nb from 45.249.247.86 port 40714
Oct 01 12:37:34 np0005464214.novalocal sshd-session[4617]: Received disconnect from 45.249.247.86 port 40714:11: Bye Bye [preauth]
Oct 01 12:37:34 np0005464214.novalocal sshd-session[4617]: Disconnected from invalid user nb 45.249.247.86 port 40714 [preauth]
Oct 01 12:37:38 np0005464214.novalocal sshd-session[4619]: Invalid user eva from 217.154.42.86 port 34760
Oct 01 12:37:38 np0005464214.novalocal sshd-session[4619]: Received disconnect from 217.154.42.86 port 34760:11: Bye Bye [preauth]
Oct 01 12:37:38 np0005464214.novalocal sshd-session[4619]: Disconnected from invalid user eva 217.154.42.86 port 34760 [preauth]
Oct 01 12:37:44 np0005464214.novalocal sshd-session[4621]: Invalid user pavan from 175.126.166.172 port 51842
Oct 01 12:37:44 np0005464214.novalocal sshd-session[4621]: Received disconnect from 175.126.166.172 port 51842:11: Bye Bye [preauth]
Oct 01 12:37:44 np0005464214.novalocal sshd-session[4621]: Disconnected from invalid user pavan 175.126.166.172 port 51842 [preauth]
Oct 01 12:37:54 np0005464214.novalocal sshd-session[1434]: Received disconnect from 38.102.83.114 port 55192:11: disconnected by user
Oct 01 12:37:54 np0005464214.novalocal sshd-session[1434]: Disconnected from user zuul 38.102.83.114 port 55192
Oct 01 12:37:54 np0005464214.novalocal sshd-session[1419]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:37:54 np0005464214.novalocal systemd-logind[818]: Session 1 logged out. Waiting for processes to exit.
Oct 01 12:38:28 np0005464214.novalocal sshd-session[4623]: Invalid user git from 121.142.87.218 port 46814
Oct 01 12:38:28 np0005464214.novalocal sshd-session[4623]: Received disconnect from 121.142.87.218 port 46814:11: Bye Bye [preauth]
Oct 01 12:38:28 np0005464214.novalocal sshd-session[4623]: Disconnected from invalid user git 121.142.87.218 port 46814 [preauth]
Oct 01 12:38:33 np0005464214.novalocal sshd-session[4625]: Invalid user admin from 49.49.32.245 port 59834
Oct 01 12:38:34 np0005464214.novalocal sshd-session[4625]: Received disconnect from 49.49.32.245 port 59834:11: Bye Bye [preauth]
Oct 01 12:38:34 np0005464214.novalocal sshd-session[4625]: Disconnected from invalid user admin 49.49.32.245 port 59834 [preauth]
Oct 01 12:38:57 np0005464214.novalocal sshd-session[4627]: Invalid user seekcy from 175.126.166.172 port 39236
Oct 01 12:38:58 np0005464214.novalocal sshd-session[4627]: Received disconnect from 175.126.166.172 port 39236:11: Bye Bye [preauth]
Oct 01 12:38:58 np0005464214.novalocal sshd-session[4627]: Disconnected from invalid user seekcy 175.126.166.172 port 39236 [preauth]
Oct 01 12:39:05 np0005464214.novalocal sshd-session[4629]: Invalid user dci from 45.249.247.86 port 42550
Oct 01 12:39:05 np0005464214.novalocal sshd-session[4629]: Received disconnect from 45.249.247.86 port 42550:11: Bye Bye [preauth]
Oct 01 12:39:05 np0005464214.novalocal sshd-session[4629]: Disconnected from invalid user dci 45.249.247.86 port 42550 [preauth]
Oct 01 12:39:39 np0005464214.novalocal sshd-session[4632]: Invalid user seekcy from 121.142.87.218 port 41646
Oct 01 12:39:39 np0005464214.novalocal sshd-session[4632]: Received disconnect from 121.142.87.218 port 41646:11: Bye Bye [preauth]
Oct 01 12:39:39 np0005464214.novalocal sshd-session[4632]: Disconnected from invalid user seekcy 121.142.87.218 port 41646 [preauth]
Oct 01 12:39:47 np0005464214.novalocal sshd-session[4634]: Invalid user nb from 49.49.32.245 port 54990
Oct 01 12:39:47 np0005464214.novalocal sshd-session[4634]: Received disconnect from 49.49.32.245 port 54990:11: Bye Bye [preauth]
Oct 01 12:39:47 np0005464214.novalocal sshd-session[4634]: Disconnected from invalid user nb 49.49.32.245 port 54990 [preauth]
Oct 01 12:39:54 np0005464214.novalocal systemd[1423]: Created slice User Background Tasks Slice.
Oct 01 12:39:54 np0005464214.novalocal systemd[1423]: Starting Cleanup of User's Temporary Files and Directories...
Oct 01 12:39:54 np0005464214.novalocal systemd[1423]: Finished Cleanup of User's Temporary Files and Directories.
Oct 01 12:40:12 np0005464214.novalocal sshd-session[4638]: Invalid user teste from 175.126.166.172 port 41004
Oct 01 12:40:12 np0005464214.novalocal sshd-session[4638]: Received disconnect from 175.126.166.172 port 41004:11: Bye Bye [preauth]
Oct 01 12:40:12 np0005464214.novalocal sshd-session[4638]: Disconnected from invalid user teste 175.126.166.172 port 41004 [preauth]
Oct 01 12:40:40 np0005464214.novalocal systemd[1]: Starting dnf makecache...
Oct 01 12:40:40 np0005464214.novalocal sshd-session[4640]: Invalid user seekcy from 45.249.247.86 port 56862
Oct 01 12:40:40 np0005464214.novalocal dnf[4642]: Metadata cache refreshed recently.
Oct 01 12:40:40 np0005464214.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 01 12:40:40 np0005464214.novalocal systemd[1]: Finished dnf makecache.
Oct 01 12:40:40 np0005464214.novalocal sshd-session[4640]: Received disconnect from 45.249.247.86 port 56862:11: Bye Bye [preauth]
Oct 01 12:40:40 np0005464214.novalocal sshd-session[4640]: Disconnected from invalid user seekcy 45.249.247.86 port 56862 [preauth]
Oct 01 12:40:57 np0005464214.novalocal sshd-session[4644]: Invalid user hung from 121.142.87.218 port 36484
Oct 01 12:40:57 np0005464214.novalocal sshd-session[4644]: Received disconnect from 121.142.87.218 port 36484:11: Bye Bye [preauth]
Oct 01 12:40:57 np0005464214.novalocal sshd-session[4644]: Disconnected from invalid user hung 121.142.87.218 port 36484 [preauth]
Oct 01 12:41:00 np0005464214.novalocal sshd-session[4646]: Invalid user khoa from 49.49.32.245 port 50158
Oct 01 12:41:01 np0005464214.novalocal sshd-session[4646]: Received disconnect from 49.49.32.245 port 50158:11: Bye Bye [preauth]
Oct 01 12:41:01 np0005464214.novalocal sshd-session[4646]: Disconnected from invalid user khoa 49.49.32.245 port 50158 [preauth]
Oct 01 12:41:26 np0005464214.novalocal sshd-session[4648]: Invalid user ubuntu from 175.126.166.172 port 40052
Oct 01 12:41:26 np0005464214.novalocal sshd-session[4648]: Received disconnect from 175.126.166.172 port 40052:11: Bye Bye [preauth]
Oct 01 12:41:26 np0005464214.novalocal sshd-session[4648]: Disconnected from invalid user ubuntu 175.126.166.172 port 40052 [preauth]
Oct 01 12:42:11 np0005464214.novalocal sshd-session[4655]: Accepted publickey for zuul from 38.102.83.114 port 56294 ssh2: RSA SHA256:tSx7W6G1Z7aOy2GAa2AuzDc8oXNjA1+IQNz1loW/bEk
Oct 01 12:42:11 np0005464214.novalocal systemd-logind[818]: New session 3 of user zuul.
Oct 01 12:42:11 np0005464214.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 01 12:42:11 np0005464214.novalocal sshd-session[4655]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:42:11 np0005464214.novalocal sudo[4682]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwnwxjdtrvwaaidxoqicevpupiowowjk ; /usr/bin/python3'
Oct 01 12:42:11 np0005464214.novalocal sudo[4682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:11 np0005464214.novalocal python3[4684]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ec2-ffbe-bc20-fbfc-000000001cea-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:11 np0005464214.novalocal sudo[4682]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:11 np0005464214.novalocal sudo[4711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuilaimduailnugrhswuheyvpndbmiqm ; /usr/bin/python3'
Oct 01 12:42:11 np0005464214.novalocal sudo[4711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:11 np0005464214.novalocal sshd-session[4652]: Invalid user denver from 121.142.87.218 port 59548
Oct 01 12:42:11 np0005464214.novalocal python3[4713]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:42:11 np0005464214.novalocal sudo[4711]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:11 np0005464214.novalocal sudo[4737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjrtodscwyyzlovprikqmuglgidtxogl ; /usr/bin/python3'
Oct 01 12:42:11 np0005464214.novalocal sudo[4737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:11 np0005464214.novalocal sshd-session[4652]: Received disconnect from 121.142.87.218 port 59548:11: Bye Bye [preauth]
Oct 01 12:42:11 np0005464214.novalocal sshd-session[4652]: Disconnected from invalid user denver 121.142.87.218 port 59548 [preauth]
Oct 01 12:42:11 np0005464214.novalocal python3[4739]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:42:11 np0005464214.novalocal sudo[4737]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:12 np0005464214.novalocal sudo[4763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezuavgnqodafqnbuzqipliitfuzjfdkc ; /usr/bin/python3'
Oct 01 12:42:12 np0005464214.novalocal sudo[4763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:12 np0005464214.novalocal python3[4765]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:42:12 np0005464214.novalocal sudo[4763]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:12 np0005464214.novalocal sudo[4789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfdvanfaiwhxfjlebjqbyegvaunuiscz ; /usr/bin/python3'
Oct 01 12:42:12 np0005464214.novalocal sudo[4789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:12 np0005464214.novalocal python3[4791]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:42:12 np0005464214.novalocal sudo[4789]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:12 np0005464214.novalocal sudo[4816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boltgqzsrfikyxcmjivedvesxbfzachi ; /usr/bin/python3'
Oct 01 12:42:12 np0005464214.novalocal sudo[4816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:12 np0005464214.novalocal python3[4819]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:42:12 np0005464214.novalocal python3[4819]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 01 12:42:13 np0005464214.novalocal sudo[4816]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:13 np0005464214.novalocal sudo[4843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwphkvtiukzofneviolquqfxkupeyqcv ; /usr/bin/python3'
Oct 01 12:42:13 np0005464214.novalocal sudo[4843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:13 np0005464214.novalocal python3[4845]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 12:42:13 np0005464214.novalocal systemd[1]: Reloading.
Oct 01 12:42:13 np0005464214.novalocal systemd-rc-local-generator[4864]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 12:42:14 np0005464214.novalocal sshd-session[4815]: Invalid user seekcy from 45.249.247.86 port 37630
Oct 01 12:42:14 np0005464214.novalocal sudo[4843]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:14 np0005464214.novalocal sshd-session[4815]: Received disconnect from 45.249.247.86 port 37630:11: Bye Bye [preauth]
Oct 01 12:42:14 np0005464214.novalocal sshd-session[4815]: Disconnected from invalid user seekcy 45.249.247.86 port 37630 [preauth]
Oct 01 12:42:15 np0005464214.novalocal sudo[4899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzegokdfgfqzivkywzxglbxypdfamocx ; /usr/bin/python3'
Oct 01 12:42:15 np0005464214.novalocal sudo[4899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:15 np0005464214.novalocal python3[4901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 01 12:42:15 np0005464214.novalocal sudo[4899]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:15 np0005464214.novalocal sudo[4925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvutwsiyzucoocdhwowzbaswewfnczvq ; /usr/bin/python3'
Oct 01 12:42:15 np0005464214.novalocal sudo[4925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:15 np0005464214.novalocal python3[4927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:15 np0005464214.novalocal sudo[4925]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:15 np0005464214.novalocal sudo[4953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kengagblbpeemobgrjgmvbcmmloudldc ; /usr/bin/python3'
Oct 01 12:42:16 np0005464214.novalocal sudo[4953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:16 np0005464214.novalocal python3[4955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:16 np0005464214.novalocal sudo[4953]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:16 np0005464214.novalocal sudo[4981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stngvhnasdmckireozvmzmqokuwdyclm ; /usr/bin/python3'
Oct 01 12:42:16 np0005464214.novalocal sudo[4981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:16 np0005464214.novalocal python3[4983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:16 np0005464214.novalocal sudo[4981]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:16 np0005464214.novalocal sudo[5009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlihaklrixcnswdcduuairtiaqahmhu ; /usr/bin/python3'
Oct 01 12:42:16 np0005464214.novalocal sudo[5009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:16 np0005464214.novalocal python3[5011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:16 np0005464214.novalocal sudo[5009]: pam_unix(sudo:session): session closed for user root
Oct 01 12:42:17 np0005464214.novalocal python3[5038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-bc20-fbfc-000000001cf0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:42:17 np0005464214.novalocal python3[5068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 12:42:19 np0005464214.novalocal sshd-session[4658]: Connection closed by 38.102.83.114 port 56294
Oct 01 12:42:19 np0005464214.novalocal sshd-session[4655]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:42:19 np0005464214.novalocal systemd-logind[818]: Session 3 logged out. Waiting for processes to exit.
Oct 01 12:42:19 np0005464214.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 01 12:42:19 np0005464214.novalocal systemd[1]: session-3.scope: Consumed 3.494s CPU time.
Oct 01 12:42:19 np0005464214.novalocal systemd-logind[818]: Removed session 3.
Oct 01 12:42:21 np0005464214.novalocal sshd-session[5076]: Accepted publickey for zuul from 38.102.83.114 port 53008 ssh2: RSA SHA256:tSx7W6G1Z7aOy2GAa2AuzDc8oXNjA1+IQNz1loW/bEk
Oct 01 12:42:21 np0005464214.novalocal systemd-logind[818]: New session 4 of user zuul.
Oct 01 12:42:21 np0005464214.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 01 12:42:21 np0005464214.novalocal sshd-session[5076]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:42:21 np0005464214.novalocal sudo[5103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlcmutipsnakebeoogdeudvegynsmmmn ; /usr/bin/python3'
Oct 01 12:42:21 np0005464214.novalocal sudo[5103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:42:21 np0005464214.novalocal python3[5105]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 12:42:22 np0005464214.novalocal sshd-session[5069]: Received disconnect from 49.49.32.245 port 45320:11: Bye Bye [preauth]
Oct 01 12:42:22 np0005464214.novalocal sshd-session[5069]: Disconnected from authenticating user root 49.49.32.245 port 45320 [preauth]
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 12:42:35 np0005464214.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 12:42:42 np0005464214.novalocal sshd-session[5150]: Invalid user admin from 175.126.166.172 port 36804
Oct 01 12:42:42 np0005464214.novalocal sshd-session[5150]: Received disconnect from 175.126.166.172 port 36804:11: Bye Bye [preauth]
Oct 01 12:42:42 np0005464214.novalocal sshd-session[5150]: Disconnected from invalid user admin 175.126.166.172 port 36804 [preauth]
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 12:42:44 np0005464214.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 12:42:53 np0005464214.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 12:42:54 np0005464214.novalocal setsebool[5167]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 01 12:42:54 np0005464214.novalocal setsebool[5167]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  Converting 369 SID table entries...
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 12:43:05 np0005464214.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 12:43:23 np0005464214.novalocal dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 01 12:43:24 np0005464214.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 12:43:24 np0005464214.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 01 12:43:24 np0005464214.novalocal systemd[1]: Reloading.
Oct 01 12:43:24 np0005464214.novalocal systemd-rc-local-generator[5921]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 12:43:24 np0005464214.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 12:43:25 np0005464214.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 01 12:43:25 np0005464214.novalocal PackageKit[6569]: daemon start
Oct 01 12:43:25 np0005464214.novalocal systemd[1]: Starting Authorization Manager...
Oct 01 12:43:25 np0005464214.novalocal polkitd[6665]: Started polkitd version 0.117
Oct 01 12:43:25 np0005464214.novalocal polkitd[6665]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 12:43:25 np0005464214.novalocal polkitd[6665]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 12:43:25 np0005464214.novalocal polkitd[6665]: Finished loading, compiling and executing 3 rules
Oct 01 12:43:25 np0005464214.novalocal systemd[1]: Started Authorization Manager.
Oct 01 12:43:25 np0005464214.novalocal polkitd[6665]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 01 12:43:25 np0005464214.novalocal systemd[1]: Started PackageKit Daemon.
Oct 01 12:43:25 np0005464214.novalocal sudo[5103]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:26 np0005464214.novalocal python3[7429]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-db01-fad1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:43:27 np0005464214.novalocal kernel: evm: overlay not supported
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: Starting D-Bus User Message Bus...
Oct 01 12:43:27 np0005464214.novalocal dbus-broker-launch[8320]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: Started D-Bus User Message Bus.
Oct 01 12:43:27 np0005464214.novalocal dbus-broker-launch[8320]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 01 12:43:27 np0005464214.novalocal dbus-broker-lau[8320]: Ready
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: Created slice Slice /user.
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: podman-8174.scope: unit configures an IP firewall, but not running as root.
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: (This warning is only shown for the first unit using IP firewalling.)
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: Started podman-8174.scope.
Oct 01 12:43:27 np0005464214.novalocal systemd[1423]: Started podman-pause-c7baf0b7.scope.
Oct 01 12:43:27 np0005464214.novalocal sudo[9033]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnhshcdjjlalamvzswnubpphnzwyiwbp ; /usr/bin/python3'
Oct 01 12:43:27 np0005464214.novalocal sudo[9033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:28 np0005464214.novalocal python3[9040]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.102.83.113:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.102.83.113:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:43:28 np0005464214.novalocal sudo[9033]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:28 np0005464214.novalocal sshd-session[5079]: Connection closed by 38.102.83.114 port 53008
Oct 01 12:43:28 np0005464214.novalocal sshd-session[5076]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:43:28 np0005464214.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 01 12:43:28 np0005464214.novalocal systemd[1]: session-4.scope: Consumed 58.997s CPU time.
Oct 01 12:43:28 np0005464214.novalocal systemd-logind[818]: Session 4 logged out. Waiting for processes to exit.
Oct 01 12:43:28 np0005464214.novalocal systemd-logind[818]: Removed session 4.
Oct 01 12:43:29 np0005464214.novalocal sshd-session[9360]: Received disconnect from 121.142.87.218 port 54384:11: Bye Bye [preauth]
Oct 01 12:43:29 np0005464214.novalocal sshd-session[9360]: Disconnected from authenticating user root 121.142.87.218 port 54384 [preauth]
Oct 01 12:43:42 np0005464214.novalocal sshd-session[14747]: Invalid user seekcy from 49.49.32.245 port 40494
Oct 01 12:43:42 np0005464214.novalocal sshd-session[14747]: Received disconnect from 49.49.32.245 port 40494:11: Bye Bye [preauth]
Oct 01 12:43:42 np0005464214.novalocal sshd-session[14747]: Disconnected from invalid user seekcy 49.49.32.245 port 40494 [preauth]
Oct 01 12:43:47 np0005464214.novalocal sshd-session[17471]: Connection closed by 38.102.83.150 port 51422 [preauth]
Oct 01 12:43:47 np0005464214.novalocal sshd-session[17473]: Connection closed by 38.102.83.150 port 51434 [preauth]
Oct 01 12:43:47 np0005464214.novalocal sshd-session[17477]: Unable to negotiate with 38.102.83.150 port 51442: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 01 12:43:47 np0005464214.novalocal sshd-session[17484]: Unable to negotiate with 38.102.83.150 port 51450: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 01 12:43:47 np0005464214.novalocal sshd-session[17480]: Unable to negotiate with 38.102.83.150 port 51458: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 01 12:43:51 np0005464214.novalocal sshd-session[19223]: Accepted publickey for zuul from 38.102.83.114 port 36650 ssh2: RSA SHA256:tSx7W6G1Z7aOy2GAa2AuzDc8oXNjA1+IQNz1loW/bEk
Oct 01 12:43:51 np0005464214.novalocal systemd-logind[818]: New session 5 of user zuul.
Oct 01 12:43:51 np0005464214.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 01 12:43:51 np0005464214.novalocal sshd-session[19223]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:43:52 np0005464214.novalocal python3[19325]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:43:52 np0005464214.novalocal sudo[19479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kriqkpaoovllhmmbfcyrqaooapuxoeuf ; /usr/bin/python3'
Oct 01 12:43:52 np0005464214.novalocal sudo[19479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:52 np0005464214.novalocal python3[19489]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:43:52 np0005464214.novalocal sudo[19479]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:53 np0005464214.novalocal sudo[19798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anbgykhfewehojejwvusfggfodvqucyv ; /usr/bin/python3'
Oct 01 12:43:53 np0005464214.novalocal sudo[19798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:53 np0005464214.novalocal python3[19807]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005464214.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 01 12:43:53 np0005464214.novalocal useradd[19888]: new group: name=cloud-admin, GID=1002
Oct 01 12:43:53 np0005464214.novalocal useradd[19888]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 01 12:43:53 np0005464214.novalocal sudo[19798]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:53 np0005464214.novalocal sudo[20027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqurudckybcroyyeqonszepcdyzjlwa ; /usr/bin/python3'
Oct 01 12:43:53 np0005464214.novalocal sudo[20027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:53 np0005464214.novalocal python3[20036]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNR4pTn4diMSkjwSG70fVeti9Lf6A4B/Bmz+ENT8b+tD8PK6ZGURxDMk3ySuFdE0LGwIJtSh3Ou06MeEB6m4ODI= zuul@np0005464222.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 01 12:43:53 np0005464214.novalocal sudo[20027]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:53 np0005464214.novalocal sudo[20294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkivmbnikvngzmlerucomuysnszqnmp ; /usr/bin/python3'
Oct 01 12:43:53 np0005464214.novalocal sudo[20294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:53 np0005464214.novalocal python3[20303]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:43:54 np0005464214.novalocal sudo[20294]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:54 np0005464214.novalocal sudo[20600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyexsftbciotipqafmrcbiaifgrtmwl ; /usr/bin/python3'
Oct 01 12:43:54 np0005464214.novalocal sudo[20600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:54 np0005464214.novalocal python3[20608]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759322633.7521372-135-217831172634294/source _original_basename=tmpzf8apmpw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:43:54 np0005464214.novalocal sudo[20600]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:55 np0005464214.novalocal sudo[20929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frclfjxxgoqptbfbsgwyfruhuaplnnnp ; /usr/bin/python3'
Oct 01 12:43:55 np0005464214.novalocal sudo[20929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:43:55 np0005464214.novalocal python3[20939]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct 01 12:43:55 np0005464214.novalocal systemd[1]: Starting Hostname Service...
Oct 01 12:43:55 np0005464214.novalocal systemd[1]: Started Hostname Service.
Oct 01 12:43:56 np0005464214.novalocal systemd-hostnamed[21072]: Changed pretty hostname to 'compute-0'
Oct 01 12:43:56 compute-0 systemd-hostnamed[21072]: Hostname set to <compute-0> (static)
Oct 01 12:43:56 compute-0 NetworkManager[4330]: <info>  [1759322636.5176] hostname: static hostname changed from "np0005464214.novalocal" to "compute-0"
Oct 01 12:43:56 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 12:43:56 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 12:43:56 compute-0 sudo[20929]: pam_unix(sudo:session): session closed for user root
Oct 01 12:43:56 compute-0 sshd-session[19272]: Connection closed by 38.102.83.114 port 36650
Oct 01 12:43:56 compute-0 sshd-session[19223]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:43:56 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Oct 01 12:43:56 compute-0 systemd[1]: session-5.scope: Consumed 2.240s CPU time.
Oct 01 12:43:56 compute-0 systemd-logind[818]: Session 5 logged out. Waiting for processes to exit.
Oct 01 12:43:56 compute-0 systemd-logind[818]: Removed session 5.
Oct 01 12:43:59 compute-0 sshd-session[22547]: Invalid user seekcy from 175.126.166.172 port 38634
Oct 01 12:43:59 compute-0 sshd-session[22547]: Received disconnect from 175.126.166.172 port 38634:11: Bye Bye [preauth]
Oct 01 12:43:59 compute-0 sshd-session[22547]: Disconnected from invalid user seekcy 175.126.166.172 port 38634 [preauth]
Oct 01 12:44:06 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 12:44:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 12:44:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 12:44:10 compute-0 systemd[1]: man-db-cache-update.service: Consumed 55.619s CPU time.
Oct 01 12:44:10 compute-0 systemd[1]: run-r5a754d2d04604b12a8c29cf2632f439c.service: Deactivated successfully.
Oct 01 12:44:26 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 12:44:45 compute-0 sshd-session[26961]: Invalid user fernando from 121.142.87.218 port 49230
Oct 01 12:44:46 compute-0 sshd-session[26961]: Received disconnect from 121.142.87.218 port 49230:11: Bye Bye [preauth]
Oct 01 12:44:46 compute-0 sshd-session[26961]: Disconnected from invalid user fernando 121.142.87.218 port 49230 [preauth]
Oct 01 12:45:04 compute-0 sshd-session[26963]: Invalid user steam1 from 49.49.32.245 port 35654
Oct 01 12:45:04 compute-0 sshd-session[26963]: Received disconnect from 49.49.32.245 port 35654:11: Bye Bye [preauth]
Oct 01 12:45:04 compute-0 sshd-session[26963]: Disconnected from invalid user steam1 49.49.32.245 port 35654 [preauth]
Oct 01 12:45:17 compute-0 sshd-session[26966]: Invalid user fff from 175.126.166.172 port 56596
Oct 01 12:45:17 compute-0 sshd-session[26966]: Received disconnect from 175.126.166.172 port 56596:11: Bye Bye [preauth]
Oct 01 12:45:17 compute-0 sshd-session[26966]: Disconnected from invalid user fff 175.126.166.172 port 56596 [preauth]
Oct 01 12:45:18 compute-0 sshd-session[26968]: Connection closed by authenticating user root 185.156.73.233 port 24334 [preauth]
Oct 01 12:45:21 compute-0 sshd-session[26970]: Invalid user ftpuser from 45.249.247.86 port 43780
Oct 01 12:45:21 compute-0 sshd-session[26970]: Received disconnect from 45.249.247.86 port 43780:11: Bye Bye [preauth]
Oct 01 12:45:21 compute-0 sshd-session[26970]: Disconnected from invalid user ftpuser 45.249.247.86 port 43780 [preauth]
Oct 01 12:46:00 compute-0 sshd-session[26973]: Invalid user seekcy from 121.142.87.218 port 44066
Oct 01 12:46:00 compute-0 sshd-session[26973]: Received disconnect from 121.142.87.218 port 44066:11: Bye Bye [preauth]
Oct 01 12:46:00 compute-0 sshd-session[26973]: Disconnected from invalid user seekcy 121.142.87.218 port 44066 [preauth]
Oct 01 12:46:19 compute-0 sshd-session[26975]: Received disconnect from 49.49.32.245 port 59040:11: Bye Bye [preauth]
Oct 01 12:46:19 compute-0 sshd-session[26975]: Disconnected from authenticating user root 49.49.32.245 port 59040 [preauth]
Oct 01 12:46:35 compute-0 sshd-session[26977]: Received disconnect from 175.126.166.172 port 46220:11: Bye Bye [preauth]
Oct 01 12:46:35 compute-0 sshd-session[26977]: Disconnected from authenticating user root 175.126.166.172 port 46220 [preauth]
Oct 01 12:46:55 compute-0 sshd-session[26979]: Invalid user fff from 45.249.247.86 port 38082
Oct 01 12:46:55 compute-0 sshd-session[26979]: Received disconnect from 45.249.247.86 port 38082:11: Bye Bye [preauth]
Oct 01 12:46:55 compute-0 sshd-session[26979]: Disconnected from invalid user fff 45.249.247.86 port 38082 [preauth]
Oct 01 12:47:15 compute-0 sshd-session[26981]: Invalid user etherpad from 121.142.87.218 port 38904
Oct 01 12:47:15 compute-0 sshd-session[26981]: Received disconnect from 121.142.87.218 port 38904:11: Bye Bye [preauth]
Oct 01 12:47:15 compute-0 sshd-session[26981]: Disconnected from invalid user etherpad 121.142.87.218 port 38904 [preauth]
Oct 01 12:47:21 compute-0 sshd-session[26983]: Accepted publickey for zuul from 38.102.83.150 port 32780 ssh2: RSA SHA256:tSx7W6G1Z7aOy2GAa2AuzDc8oXNjA1+IQNz1loW/bEk
Oct 01 12:47:21 compute-0 systemd-logind[818]: New session 6 of user zuul.
Oct 01 12:47:21 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 01 12:47:21 compute-0 sshd-session[26983]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:47:22 compute-0 python3[27059]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:47:23 compute-0 sudo[27173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htvzczwhpazxhudeunohinuabczedpey ; /usr/bin/python3'
Oct 01 12:47:23 compute-0 sudo[27173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:23 compute-0 python3[27175]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:23 compute-0 sudo[27173]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:24 compute-0 sudo[27246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwemvhwnhoikjrxmcqeidhhoqiispbxo ; /usr/bin/python3'
Oct 01 12:47:24 compute-0 sudo[27246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:24 compute-0 python3[27248]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:24 compute-0 sudo[27246]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:24 compute-0 sudo[27272]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdewvamoogvqhqxlwtwdtdauoswjpmx ; /usr/bin/python3'
Oct 01 12:47:24 compute-0 sudo[27272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:24 compute-0 python3[27274]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:24 compute-0 sudo[27272]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:24 compute-0 sudo[27345]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlfowclnuowhcrnmkunetxtlyhiggguj ; /usr/bin/python3'
Oct 01 12:47:24 compute-0 sudo[27345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:25 compute-0 python3[27347]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:25 compute-0 sudo[27345]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:25 compute-0 sudo[27371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmpactrzyusavasoanxjhdmgvxipiab ; /usr/bin/python3'
Oct 01 12:47:25 compute-0 sudo[27371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:25 compute-0 python3[27373]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:25 compute-0 sudo[27371]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:25 compute-0 sudo[27444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfchctrivuusycuhxlhoajbodnutckpr ; /usr/bin/python3'
Oct 01 12:47:25 compute-0 sudo[27444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:25 compute-0 python3[27446]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:25 compute-0 sudo[27444]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:25 compute-0 sudo[27470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-envazawonztiettnocshtkuwaektssle ; /usr/bin/python3'
Oct 01 12:47:25 compute-0 sudo[27470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:26 compute-0 python3[27472]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:26 compute-0 sudo[27470]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:26 compute-0 sudo[27543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztozpuxhnaqrgxwqjaapbaapsqugdptd ; /usr/bin/python3'
Oct 01 12:47:26 compute-0 sudo[27543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:26 compute-0 python3[27545]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:26 compute-0 sudo[27543]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:26 compute-0 sudo[27569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kigxksfainasyzlknzyzoesuowdzxtvn ; /usr/bin/python3'
Oct 01 12:47:26 compute-0 sudo[27569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:26 compute-0 python3[27571]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:26 compute-0 sudo[27569]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:27 compute-0 sudo[27642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvuozviizstiezdmqwsxsurxvuprfwxf ; /usr/bin/python3'
Oct 01 12:47:27 compute-0 sudo[27642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:27 compute-0 python3[27644]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:27 compute-0 sudo[27642]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:27 compute-0 sudo[27668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxylfjsozlqvjrusyjpadxaovdmpfohz ; /usr/bin/python3'
Oct 01 12:47:27 compute-0 sudo[27668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:27 compute-0 python3[27670]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:27 compute-0 sudo[27668]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:27 compute-0 sudo[27741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqkthlrkfmqvctdbsfbtwmeatbdilgwu ; /usr/bin/python3'
Oct 01 12:47:27 compute-0 sudo[27741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:27 compute-0 python3[27743]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:27 compute-0 sudo[27741]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:27 compute-0 sudo[27767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfggbzaxeawxjvwfxaxijdynewqfxixi ; /usr/bin/python3'
Oct 01 12:47:27 compute-0 sudo[27767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:28 compute-0 python3[27769]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 12:47:28 compute-0 sudo[27767]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:28 compute-0 sudo[27840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcohysryeuqievumbrqimizrqwehkosf ; /usr/bin/python3'
Oct 01 12:47:28 compute-0 sudo[27840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:47:28 compute-0 python3[27842]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759322843.6316578-30783-20533283724216/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:47:28 compute-0 sudo[27840]: pam_unix(sudo:session): session closed for user root
Oct 01 12:47:30 compute-0 sshd-session[27867]: Connection closed by 192.168.122.11 port 58704 [preauth]
Oct 01 12:47:30 compute-0 sshd-session[27868]: Connection closed by 192.168.122.11 port 58714 [preauth]
Oct 01 12:47:30 compute-0 sshd-session[27869]: Unable to negotiate with 192.168.122.11 port 58728: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 01 12:47:30 compute-0 sshd-session[27870]: Unable to negotiate with 192.168.122.11 port 58730: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 01 12:47:30 compute-0 sshd-session[27871]: Unable to negotiate with 192.168.122.11 port 58734: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 01 12:47:36 compute-0 sshd-session[27877]: Invalid user webdev from 49.49.32.245 port 54210
Oct 01 12:47:36 compute-0 sshd-session[27877]: Received disconnect from 49.49.32.245 port 54210:11: Bye Bye [preauth]
Oct 01 12:47:36 compute-0 sshd-session[27877]: Disconnected from invalid user webdev 49.49.32.245 port 54210 [preauth]
Oct 01 12:47:42 compute-0 python3[27902]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:47:53 compute-0 sshd-session[27905]: Invalid user seekcy from 175.126.166.172 port 42560
Oct 01 12:47:53 compute-0 sshd-session[27905]: Received disconnect from 175.126.166.172 port 42560:11: Bye Bye [preauth]
Oct 01 12:47:53 compute-0 sshd-session[27905]: Disconnected from invalid user seekcy 175.126.166.172 port 42560 [preauth]
Oct 01 12:48:29 compute-0 sshd-session[27907]: Invalid user factorio from 45.249.247.86 port 42996
Oct 01 12:48:29 compute-0 sshd-session[27907]: Received disconnect from 45.249.247.86 port 42996:11: Bye Bye [preauth]
Oct 01 12:48:29 compute-0 sshd-session[27907]: Disconnected from invalid user factorio 45.249.247.86 port 42996 [preauth]
Oct 01 12:48:30 compute-0 PackageKit[6569]: daemon quit
Oct 01 12:48:30 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 12:48:32 compute-0 sshd-session[27910]: Invalid user sharan from 121.142.87.218 port 33736
Oct 01 12:48:32 compute-0 sshd-session[27910]: Received disconnect from 121.142.87.218 port 33736:11: Bye Bye [preauth]
Oct 01 12:48:32 compute-0 sshd-session[27910]: Disconnected from invalid user sharan 121.142.87.218 port 33736 [preauth]
Oct 01 12:48:51 compute-0 sshd-session[27912]: Invalid user seekcy from 200.7.101.139 port 59544
Oct 01 12:48:51 compute-0 sshd-session[27912]: Received disconnect from 200.7.101.139 port 59544:11: Bye Bye [preauth]
Oct 01 12:48:51 compute-0 sshd-session[27912]: Disconnected from invalid user seekcy 200.7.101.139 port 59544 [preauth]
Oct 01 12:48:54 compute-0 sshd-session[27914]: Invalid user dci from 49.49.32.245 port 49376
Oct 01 12:48:56 compute-0 sshd-session[27914]: Received disconnect from 49.49.32.245 port 49376:11: Bye Bye [preauth]
Oct 01 12:48:56 compute-0 sshd-session[27914]: Disconnected from invalid user dci 49.49.32.245 port 49376 [preauth]
Oct 01 12:49:13 compute-0 sshd-session[27916]: Invalid user factorio from 175.126.166.172 port 37522
Oct 01 12:49:13 compute-0 sshd-session[27916]: Received disconnect from 175.126.166.172 port 37522:11: Bye Bye [preauth]
Oct 01 12:49:13 compute-0 sshd-session[27916]: Disconnected from invalid user factorio 175.126.166.172 port 37522 [preauth]
Oct 01 12:49:49 compute-0 sshd-session[27918]: Invalid user seekcy from 121.142.87.218 port 56800
Oct 01 12:49:49 compute-0 sshd-session[27918]: Received disconnect from 121.142.87.218 port 56800:11: Bye Bye [preauth]
Oct 01 12:49:49 compute-0 sshd-session[27918]: Disconnected from invalid user seekcy 121.142.87.218 port 56800 [preauth]
Oct 01 12:50:15 compute-0 sshd-session[27920]: Invalid user ftpuser from 49.49.32.245 port 44548
Oct 01 12:50:15 compute-0 sshd-session[27920]: Received disconnect from 49.49.32.245 port 44548:11: Bye Bye [preauth]
Oct 01 12:50:15 compute-0 sshd-session[27920]: Disconnected from invalid user ftpuser 49.49.32.245 port 44548 [preauth]
Oct 01 12:50:31 compute-0 sshd-session[27922]: Received disconnect from 175.126.166.172 port 33444:11: Bye Bye [preauth]
Oct 01 12:50:31 compute-0 sshd-session[27922]: Disconnected from authenticating user root 175.126.166.172 port 33444 [preauth]
Oct 01 12:50:57 compute-0 sshd-session[27924]: Connection closed by 101.36.106.134 port 33896
Oct 01 12:50:57 compute-0 sshd-session[27925]: error: kex_ecdh_dec_key_group: Peer public key import failed [preauth]
Oct 01 12:50:57 compute-0 sshd-session[27925]: ssh_dispatch_run_fatal: Connection from 101.36.106.134 port 34450: error in libcrypto [preauth]
Oct 01 12:50:58 compute-0 sshd-session[27927]: Unable to negotiate with 101.36.106.134 port 35012: no matching host key type found. Their offer: ssh-rsa [preauth]
Oct 01 12:51:04 compute-0 sshd-session[27929]: Invalid user fengyun from 121.142.87.218 port 51636
Oct 01 12:51:05 compute-0 sshd-session[27929]: Received disconnect from 121.142.87.218 port 51636:11: Bye Bye [preauth]
Oct 01 12:51:05 compute-0 sshd-session[27929]: Disconnected from invalid user fengyun 121.142.87.218 port 51636 [preauth]
Oct 01 12:51:10 compute-0 sshd-session[27931]: Invalid user cristi from 156.236.31.46 port 42224
Oct 01 12:51:10 compute-0 sshd-session[27931]: Received disconnect from 156.236.31.46 port 42224:11: Bye Bye [preauth]
Oct 01 12:51:10 compute-0 sshd-session[27931]: Disconnected from invalid user cristi 156.236.31.46 port 42224 [preauth]
Oct 01 12:51:29 compute-0 sshd-session[27933]: Invalid user so from 49.49.32.245 port 39718
Oct 01 12:51:29 compute-0 sshd-session[27933]: Received disconnect from 49.49.32.245 port 39718:11: Bye Bye [preauth]
Oct 01 12:51:29 compute-0 sshd-session[27933]: Disconnected from invalid user so 49.49.32.245 port 39718 [preauth]
Oct 01 12:51:39 compute-0 sshd-session[27935]: Received disconnect from 45.249.247.86 port 39796:11: Bye Bye [preauth]
Oct 01 12:51:39 compute-0 sshd-session[27935]: Disconnected from authenticating user root 45.249.247.86 port 39796 [preauth]
Oct 01 12:51:46 compute-0 sshd-session[27937]: Invalid user khoa from 175.126.166.172 port 36918
Oct 01 12:51:46 compute-0 sshd-session[27937]: Received disconnect from 175.126.166.172 port 36918:11: Bye Bye [preauth]
Oct 01 12:51:46 compute-0 sshd-session[27937]: Disconnected from invalid user khoa 175.126.166.172 port 36918 [preauth]
Oct 01 12:52:42 compute-0 sshd-session[26986]: Received disconnect from 38.102.83.150 port 32780:11: disconnected by user
Oct 01 12:52:42 compute-0 sshd-session[26986]: Disconnected from user zuul 38.102.83.150 port 32780
Oct 01 12:52:42 compute-0 sshd-session[26983]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:52:42 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 01 12:52:42 compute-0 systemd[1]: session-6.scope: Consumed 5.352s CPU time.
Oct 01 12:52:42 compute-0 systemd-logind[818]: Session 6 logged out. Waiting for processes to exit.
Oct 01 12:52:42 compute-0 systemd-logind[818]: Removed session 6.
Oct 01 12:53:00 compute-0 sshd-session[27940]: Received disconnect from 156.236.31.46 port 42336:11: Bye Bye [preauth]
Oct 01 12:53:00 compute-0 sshd-session[27940]: Disconnected from authenticating user root 156.236.31.46 port 42336 [preauth]
Oct 01 12:53:03 compute-0 sshd-session[27942]: Received disconnect from 200.7.101.139 port 53078:11: Bye Bye [preauth]
Oct 01 12:53:03 compute-0 sshd-session[27942]: Disconnected from authenticating user root 200.7.101.139 port 53078 [preauth]
Oct 01 12:54:02 compute-0 anacron[1137]: Job `cron.monthly' started
Oct 01 12:54:02 compute-0 anacron[1137]: Job `cron.monthly' terminated
Oct 01 12:54:02 compute-0 anacron[1137]: Normal exit (3 jobs run)
Oct 01 12:54:02 compute-0 sshd-session[27947]: Received disconnect from 156.236.31.46 port 42426:11: Bye Bye [preauth]
Oct 01 12:54:02 compute-0 sshd-session[27947]: Disconnected from authenticating user root 156.236.31.46 port 42426 [preauth]
Oct 01 12:54:12 compute-0 sshd-session[27949]: Invalid user sftpuser from 200.7.101.139 port 43824
Oct 01 12:54:12 compute-0 sshd-session[27949]: Received disconnect from 200.7.101.139 port 43824:11: Bye Bye [preauth]
Oct 01 12:54:12 compute-0 sshd-session[27949]: Disconnected from invalid user sftpuser 200.7.101.139 port 43824 [preauth]
Oct 01 12:54:47 compute-0 sshd-session[27951]: Received disconnect from 45.249.247.86 port 35320:11: Bye Bye [preauth]
Oct 01 12:54:47 compute-0 sshd-session[27951]: Disconnected from authenticating user root 45.249.247.86 port 35320 [preauth]
Oct 01 12:55:06 compute-0 sshd-session[27953]: Received disconnect from 156.236.31.46 port 42514:11: Bye Bye [preauth]
Oct 01 12:55:06 compute-0 sshd-session[27953]: Disconnected from authenticating user root 156.236.31.46 port 42514 [preauth]
Oct 01 12:55:24 compute-0 sshd-session[27955]: Invalid user seekcy from 200.7.101.139 port 47818
Oct 01 12:55:24 compute-0 sshd-session[27955]: Received disconnect from 200.7.101.139 port 47818:11: Bye Bye [preauth]
Oct 01 12:55:24 compute-0 sshd-session[27955]: Disconnected from invalid user seekcy 200.7.101.139 port 47818 [preauth]
Oct 01 12:55:32 compute-0 sshd-session[27957]: Invalid user jellyfin from 80.253.31.232 port 49176
Oct 01 12:55:32 compute-0 sshd-session[27957]: Received disconnect from 80.253.31.232 port 49176:11: Bye Bye [preauth]
Oct 01 12:55:32 compute-0 sshd-session[27957]: Disconnected from invalid user jellyfin 80.253.31.232 port 49176 [preauth]
Oct 01 12:56:10 compute-0 sshd-session[27959]: Received disconnect from 156.236.31.46 port 42596:11: Bye Bye [preauth]
Oct 01 12:56:10 compute-0 sshd-session[27959]: Disconnected from authenticating user root 156.236.31.46 port 42596 [preauth]
Oct 01 12:56:24 compute-0 sshd-session[27961]: Invalid user seekcy from 45.249.247.86 port 40204
Oct 01 12:56:24 compute-0 sshd-session[27961]: Received disconnect from 45.249.247.86 port 40204:11: Bye Bye [preauth]
Oct 01 12:56:24 compute-0 sshd-session[27961]: Disconnected from invalid user seekcy 45.249.247.86 port 40204 [preauth]
Oct 01 12:56:33 compute-0 sshd-session[27963]: Invalid user splunk from 200.7.101.139 port 38850
Oct 01 12:56:33 compute-0 sshd-session[27963]: Received disconnect from 200.7.101.139 port 38850:11: Bye Bye [preauth]
Oct 01 12:56:33 compute-0 sshd-session[27963]: Disconnected from invalid user splunk 200.7.101.139 port 38850 [preauth]
Oct 01 12:57:09 compute-0 sshd-session[27965]: Invalid user seekcy from 156.236.31.46 port 42686
Oct 01 12:57:10 compute-0 sshd-session[27965]: Received disconnect from 156.236.31.46 port 42686:11: Bye Bye [preauth]
Oct 01 12:57:10 compute-0 sshd-session[27965]: Disconnected from invalid user seekcy 156.236.31.46 port 42686 [preauth]
Oct 01 12:57:27 compute-0 sshd-session[27967]: Invalid user prueba from 80.94.95.115 port 61660
Oct 01 12:57:27 compute-0 sshd-session[27967]: Connection closed by invalid user prueba 80.94.95.115 port 61660 [preauth]
Oct 01 12:57:38 compute-0 sshd-session[27969]: Invalid user awx from 200.7.101.139 port 57792
Oct 01 12:57:38 compute-0 sshd-session[27969]: Received disconnect from 200.7.101.139 port 57792:11: Bye Bye [preauth]
Oct 01 12:57:38 compute-0 sshd-session[27969]: Disconnected from invalid user awx 200.7.101.139 port 57792 [preauth]
Oct 01 12:57:58 compute-0 sshd-session[27971]: Invalid user s1 from 45.249.247.86 port 33486
Oct 01 12:57:58 compute-0 sshd-session[27971]: Received disconnect from 45.249.247.86 port 33486:11: Bye Bye [preauth]
Oct 01 12:57:58 compute-0 sshd-session[27971]: Disconnected from invalid user s1 45.249.247.86 port 33486 [preauth]
Oct 01 12:57:59 compute-0 sshd-session[27973]: Invalid user helen from 14.103.127.7 port 38532
Oct 01 12:58:00 compute-0 sshd-session[27973]: Received disconnect from 14.103.127.7 port 38532:11: Bye Bye [preauth]
Oct 01 12:58:00 compute-0 sshd-session[27973]: Disconnected from invalid user helen 14.103.127.7 port 38532 [preauth]
Oct 01 12:58:08 compute-0 sshd-session[27976]: Invalid user seekcy from 156.236.31.46 port 42778
Oct 01 12:58:08 compute-0 sshd-session[27976]: Received disconnect from 156.236.31.46 port 42778:11: Bye Bye [preauth]
Oct 01 12:58:08 compute-0 sshd-session[27976]: Disconnected from invalid user seekcy 156.236.31.46 port 42778 [preauth]
Oct 01 12:58:39 compute-0 sshd-session[27980]: Invalid user kkadmin from 27.254.137.144 port 38052
Oct 01 12:58:39 compute-0 sshd-session[27980]: Received disconnect from 27.254.137.144 port 38052:11: Bye Bye [preauth]
Oct 01 12:58:39 compute-0 sshd-session[27980]: Disconnected from invalid user kkadmin 27.254.137.144 port 38052 [preauth]
Oct 01 12:58:43 compute-0 sshd-session[27982]: Received disconnect from 200.7.101.139 port 42986:11: Bye Bye [preauth]
Oct 01 12:58:43 compute-0 sshd-session[27982]: Disconnected from authenticating user root 200.7.101.139 port 42986 [preauth]
Oct 01 12:58:46 compute-0 sshd-session[27985]: Accepted publickey for zuul from 192.168.122.30 port 41704 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 12:58:46 compute-0 systemd-logind[818]: New session 7 of user zuul.
Oct 01 12:58:46 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 01 12:58:46 compute-0 sshd-session[27985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:58:48 compute-0 python3.9[28138]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:58:49 compute-0 sudo[28317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msszncckimggwagmwfyifbbavdtfofcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323528.6443028-32-52372554927161/AnsiballZ_command.py'
Oct 01 12:58:49 compute-0 sudo[28317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:58:49 compute-0 python3.9[28319]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:58:56 compute-0 sudo[28317]: pam_unix(sudo:session): session closed for user root
Oct 01 12:58:56 compute-0 sshd-session[27988]: Connection closed by 192.168.122.30 port 41704
Oct 01 12:58:56 compute-0 sshd-session[27985]: pam_unix(sshd:session): session closed for user zuul
Oct 01 12:58:56 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 01 12:58:56 compute-0 systemd[1]: session-7.scope: Consumed 7.517s CPU time.
Oct 01 12:58:56 compute-0 systemd-logind[818]: Session 7 logged out. Waiting for processes to exit.
Oct 01 12:58:56 compute-0 systemd-logind[818]: Removed session 7.
Oct 01 12:59:05 compute-0 sshd-session[28377]: Invalid user saad from 156.236.31.46 port 42862
Oct 01 12:59:05 compute-0 sshd-session[28377]: Received disconnect from 156.236.31.46 port 42862:11: Bye Bye [preauth]
Oct 01 12:59:05 compute-0 sshd-session[28377]: Disconnected from invalid user saad 156.236.31.46 port 42862 [preauth]
Oct 01 12:59:17 compute-0 sshd-session[28380]: Accepted publickey for zuul from 192.168.122.30 port 60678 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 12:59:17 compute-0 systemd-logind[818]: New session 8 of user zuul.
Oct 01 12:59:17 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 01 12:59:17 compute-0 sshd-session[28380]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 12:59:18 compute-0 python3.9[28534]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 01 12:59:19 compute-0 python3.9[28708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:59:19 compute-0 sudo[28858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtjnavowvilfporxcmjitmpipueerii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323559.5233104-45-17641513882899/AnsiballZ_command.py'
Oct 01 12:59:19 compute-0 sudo[28858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:20 compute-0 python3.9[28860]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 12:59:20 compute-0 sudo[28858]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:20 compute-0 sshd-session[28379]: Invalid user seekcy from 202.103.55.158 port 53402
Oct 01 12:59:20 compute-0 sshd-session[28379]: Received disconnect from 202.103.55.158 port 53402:11: Bye Bye [preauth]
Oct 01 12:59:20 compute-0 sshd-session[28379]: Disconnected from invalid user seekcy 202.103.55.158 port 53402 [preauth]
Oct 01 12:59:20 compute-0 sudo[29011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbkegfowurdlljtyeekwyfwrnrvwsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323560.377655-57-199913991253798/AnsiballZ_stat.py'
Oct 01 12:59:20 compute-0 sudo[29011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:20 compute-0 python3.9[29013]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 12:59:21 compute-0 sudo[29011]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:21 compute-0 sudo[29163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzrsjpnxcfikonaxvagrkofkaqoushye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323561.1535099-65-87225180206088/AnsiballZ_file.py'
Oct 01 12:59:21 compute-0 sudo[29163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:21 compute-0 python3.9[29165]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:59:21 compute-0 sudo[29163]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:22 compute-0 sudo[29315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjbzgnekhtwjkvmfbcklmhemlzygadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323561.9151914-73-240031728466491/AnsiballZ_stat.py'
Oct 01 12:59:22 compute-0 sudo[29315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:22 compute-0 python3.9[29317]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 12:59:22 compute-0 sudo[29315]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:22 compute-0 sudo[29438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfqgvqmtewqlabwfuiayiqnjamzpfxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323561.9151914-73-240031728466491/AnsiballZ_copy.py'
Oct 01 12:59:22 compute-0 sudo[29438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:23 compute-0 python3.9[29440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323561.9151914-73-240031728466491/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:59:23 compute-0 sudo[29438]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:23 compute-0 sudo[29590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzdihmlutvqevialtglcjhfxizwdpag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323563.1541944-88-13381380146644/AnsiballZ_setup.py'
Oct 01 12:59:23 compute-0 sudo[29590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:23 compute-0 python3.9[29592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:59:23 compute-0 sudo[29590]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:24 compute-0 sudo[29746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkquubiatdahmdvemgbldaielxczqhmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323564.0958517-96-126074038693433/AnsiballZ_file.py'
Oct 01 12:59:24 compute-0 sudo[29746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:24 compute-0 python3.9[29748]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 12:59:24 compute-0 sudo[29746]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:25 compute-0 python3.9[29898]: ansible-ansible.builtin.service_facts Invoked
Oct 01 12:59:30 compute-0 sshd-session[29983]: Invalid user giorgio from 45.249.247.86 port 40722
Oct 01 12:59:30 compute-0 sshd-session[29983]: Received disconnect from 45.249.247.86 port 40722:11: Bye Bye [preauth]
Oct 01 12:59:30 compute-0 sshd-session[29983]: Disconnected from invalid user giorgio 45.249.247.86 port 40722 [preauth]
Oct 01 12:59:30 compute-0 python3.9[30156]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 12:59:31 compute-0 python3.9[30306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:59:32 compute-0 python3.9[30460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 12:59:33 compute-0 sudo[30616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzscqrfkdprloytclhutqwvkujrszyok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323572.9597669-144-106117891552994/AnsiballZ_setup.py'
Oct 01 12:59:33 compute-0 sudo[30616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:33 compute-0 python3.9[30618]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 12:59:33 compute-0 sudo[30616]: pam_unix(sudo:session): session closed for user root
Oct 01 12:59:34 compute-0 sudo[30700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnhimzmqdlabjwtuziysxaxatmewpewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323572.9597669-144-106117891552994/AnsiballZ_dnf.py'
Oct 01 12:59:34 compute-0 sudo[30700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 12:59:34 compute-0 python3.9[30702]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 12:59:43 compute-0 sshd-session[30771]: Invalid user user from 80.253.31.232 port 43552
Oct 01 12:59:44 compute-0 sshd-session[30771]: Received disconnect from 80.253.31.232 port 43552:11: Bye Bye [preauth]
Oct 01 12:59:44 compute-0 sshd-session[30771]: Disconnected from invalid user user 80.253.31.232 port 43552 [preauth]
Oct 01 12:59:52 compute-0 sshd-session[30821]: Invalid user ubuntu from 200.7.101.139 port 52466
Oct 01 12:59:52 compute-0 sshd-session[30821]: Received disconnect from 200.7.101.139 port 52466:11: Bye Bye [preauth]
Oct 01 12:59:52 compute-0 sshd-session[30821]: Disconnected from invalid user ubuntu 200.7.101.139 port 52466 [preauth]
Oct 01 13:00:04 compute-0 sshd-session[30851]: Invalid user malik from 156.236.31.46 port 42948
Oct 01 13:00:04 compute-0 sshd-session[30851]: Received disconnect from 156.236.31.46 port 42948:11: Bye Bye [preauth]
Oct 01 13:00:04 compute-0 sshd-session[30851]: Disconnected from invalid user malik 156.236.31.46 port 42948 [preauth]
Oct 01 13:00:15 compute-0 systemd[1]: Reloading.
Oct 01 13:00:15 compute-0 systemd-rc-local-generator[30906]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:00:16 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 01 13:00:16 compute-0 systemd[1]: Reloading.
Oct 01 13:00:16 compute-0 systemd-rc-local-generator[30945]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:00:16 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 01 13:00:16 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 01 13:00:16 compute-0 systemd[1]: Reloading.
Oct 01 13:00:16 compute-0 systemd-rc-local-generator[30983]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:00:16 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 01 13:00:17 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:00:17 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:00:41 compute-0 sshd-session[31080]: Received disconnect from 27.254.137.144 port 41986:11: Bye Bye [preauth]
Oct 01 13:00:41 compute-0 sshd-session[31080]: Disconnected from authenticating user root 27.254.137.144 port 41986 [preauth]
Oct 01 13:00:44 compute-0 sshd-session[31097]: Invalid user ubuntu from 80.253.31.232 port 54264
Oct 01 13:00:44 compute-0 sshd-session[31097]: Received disconnect from 80.253.31.232 port 54264:11: Bye Bye [preauth]
Oct 01 13:00:44 compute-0 sshd-session[31097]: Disconnected from invalid user ubuntu 80.253.31.232 port 54264 [preauth]
Oct 01 13:01:01 compute-0 CROND[31170]: (root) CMD (run-parts /etc/cron.hourly)
Oct 01 13:01:01 compute-0 run-parts[31173]: (/etc/cron.hourly) starting 0anacron
Oct 01 13:01:01 compute-0 run-parts[31179]: (/etc/cron.hourly) finished 0anacron
Oct 01 13:01:01 compute-0 CROND[31169]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 01 13:01:05 compute-0 sshd-session[31180]: Received disconnect from 156.236.31.46 port 43034:11: Bye Bye [preauth]
Oct 01 13:01:05 compute-0 sshd-session[31180]: Disconnected from authenticating user root 156.236.31.46 port 43034 [preauth]
Oct 01 13:01:06 compute-0 sshd-session[31182]: Invalid user test_user from 200.7.101.139 port 45968
Oct 01 13:01:06 compute-0 sshd-session[31182]: Received disconnect from 200.7.101.139 port 45968:11: Bye Bye [preauth]
Oct 01 13:01:06 compute-0 sshd-session[31182]: Disconnected from invalid user test_user 200.7.101.139 port 45968 [preauth]
Oct 01 13:01:20 compute-0 kernel: SELinux:  Converting 2714 SID table entries...
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:01:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:01:21 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 01 13:01:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:01:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:01:21 compute-0 systemd[1]: Reloading.
Oct 01 13:01:21 compute-0 systemd-rc-local-generator[31323]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:01:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:01:21 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 01 13:01:21 compute-0 PackageKit[31618]: daemon start
Oct 01 13:01:21 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 01 13:01:22 compute-0 sudo[30700]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:01:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:01:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.131s CPU time.
Oct 01 13:01:22 compute-0 systemd[1]: run-r2f79dcdcc1d24daf8a6368bd19999ca0.service: Deactivated successfully.
Oct 01 13:01:22 compute-0 sudo[32239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsoprbozalslofdjicmdrbnfpbbbmas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323682.2552257-156-157347835398419/AnsiballZ_command.py'
Oct 01 13:01:22 compute-0 sudo[32239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:22 compute-0 python3.9[32241]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:01:23 compute-0 sudo[32239]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:24 compute-0 sudo[32520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzdkwmajkssbetqcsupkjheqztzrtwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323683.8905404-164-203086268195388/AnsiballZ_selinux.py'
Oct 01 13:01:24 compute-0 sudo[32520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:24 compute-0 python3.9[32522]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 01 13:01:24 compute-0 sudo[32520]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:25 compute-0 sudo[32672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwltsqiiwrmilfoxjdwdkklmllvfefqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323685.0858848-175-236871457864908/AnsiballZ_command.py'
Oct 01 13:01:25 compute-0 sudo[32672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:25 compute-0 python3.9[32674]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 01 13:01:26 compute-0 sudo[32672]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:26 compute-0 sudo[32825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heafolktrvkzdpegjqlyfkjopdjfkmts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323686.6711388-183-271954901911804/AnsiballZ_file.py'
Oct 01 13:01:26 compute-0 sudo[32825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:28 compute-0 python3.9[32827]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:01:28 compute-0 sudo[32825]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:28 compute-0 sudo[32977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stwzvjfcukmtivnvmigbxlhhxlsmtawm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323688.3161395-191-115023690391534/AnsiballZ_mount.py'
Oct 01 13:01:28 compute-0 sudo[32977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:29 compute-0 python3.9[32979]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 01 13:01:29 compute-0 sudo[32977]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:30 compute-0 sudo[33129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlisrmsoemqqzklltdssqhhsnkopceof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323689.8574407-219-119001097427655/AnsiballZ_file.py'
Oct 01 13:01:30 compute-0 sudo[33129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:30 compute-0 python3.9[33131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:01:30 compute-0 sudo[33129]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:30 compute-0 sudo[33281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmytmritjiypftovhjmdyaqktggpvwei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323690.51836-227-57260806918456/AnsiballZ_stat.py'
Oct 01 13:01:30 compute-0 sudo[33281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:31 compute-0 python3.9[33283]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:01:31 compute-0 sudo[33281]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:31 compute-0 sudo[33404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqiqktzyoqqyipstkcumpwoqpuhhbpgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323690.51836-227-57260806918456/AnsiballZ_copy.py'
Oct 01 13:01:31 compute-0 sudo[33404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:31 compute-0 python3.9[33406]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323690.51836-227-57260806918456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:01:31 compute-0 sudo[33404]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:32 compute-0 sudo[33556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwufguyarsjqanzqnqlfynjbxvghbtra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323692.0265803-254-56006832617389/AnsiballZ_getent.py'
Oct 01 13:01:32 compute-0 sudo[33556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:34 compute-0 python3.9[33558]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 01 13:01:34 compute-0 sudo[33556]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:35 compute-0 sudo[33709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhycwlocrgghknjhwnbzpbckbynegrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323695.1335196-262-227898026187736/AnsiballZ_group.py'
Oct 01 13:01:35 compute-0 sudo[33709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:35 compute-0 python3.9[33712]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:01:35 compute-0 groupadd[33713]: group added to /etc/group: name=qemu, GID=107
Oct 01 13:01:35 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:01:35 compute-0 groupadd[33713]: group added to /etc/gshadow: name=qemu
Oct 01 13:01:35 compute-0 groupadd[33713]: new group: name=qemu, GID=107
Oct 01 13:01:35 compute-0 sudo[33709]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:36 compute-0 sudo[33869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgdqcizvtgahrhzoqlzrispxhmjlhicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323695.9950197-270-56831509728267/AnsiballZ_user.py'
Oct 01 13:01:36 compute-0 sudo[33869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:36 compute-0 python3.9[33871]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 13:01:36 compute-0 useradd[33873]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 13:01:36 compute-0 sudo[33869]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:37 compute-0 sudo[34029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkjudmyytwtkbrougrvdkkbhckkaubp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323697.0404136-278-165882575014038/AnsiballZ_getent.py'
Oct 01 13:01:37 compute-0 sudo[34029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:37 compute-0 python3.9[34031]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 01 13:01:37 compute-0 sudo[34029]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:38 compute-0 sudo[34182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gshtekmoeivqqgupghqciwdqzhwthqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323697.8322334-286-79295322187673/AnsiballZ_group.py'
Oct 01 13:01:38 compute-0 sudo[34182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:38 compute-0 python3.9[34184]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:01:38 compute-0 groupadd[34185]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 01 13:01:38 compute-0 groupadd[34185]: group added to /etc/gshadow: name=hugetlbfs
Oct 01 13:01:38 compute-0 groupadd[34185]: new group: name=hugetlbfs, GID=42477
Oct 01 13:01:38 compute-0 sudo[34182]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:38 compute-0 sudo[34340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgtrutzkmmgdmybatsipfvghufmwtasc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323698.6367278-295-128478490327180/AnsiballZ_file.py'
Oct 01 13:01:38 compute-0 sudo[34340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:39 compute-0 python3.9[34342]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 01 13:01:39 compute-0 sudo[34340]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:39 compute-0 sudo[34492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcakkwecefqyxixlubqnlgcetugiokkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323699.547106-306-123648055837856/AnsiballZ_dnf.py'
Oct 01 13:01:39 compute-0 sudo[34492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:40 compute-0 python3.9[34494]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:01:41 compute-0 sudo[34492]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:42 compute-0 sudo[34645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chawxqbbyhxywfigdkxrnejmnewcqeed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323701.7597072-314-38072838717074/AnsiballZ_file.py'
Oct 01 13:01:42 compute-0 sudo[34645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:42 compute-0 python3.9[34647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:01:42 compute-0 sudo[34645]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:42 compute-0 sudo[34797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnrydnnvcdgtqudstpcjhsessenwvmcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323702.4389317-322-104338917592993/AnsiballZ_stat.py'
Oct 01 13:01:42 compute-0 sudo[34797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:42 compute-0 python3.9[34799]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:01:42 compute-0 sudo[34797]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:43 compute-0 sudo[34920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjccfddixrcagyjfazndgcdftafuzgmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323702.4389317-322-104338917592993/AnsiballZ_copy.py'
Oct 01 13:01:43 compute-0 sudo[34920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:43 compute-0 python3.9[34922]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323702.4389317-322-104338917592993/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:01:43 compute-0 sudo[34920]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:44 compute-0 sudo[35074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diysgyqcjxurbtupbzdaswvmmatrmqet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323703.6243663-337-209411651560664/AnsiballZ_systemd.py'
Oct 01 13:01:44 compute-0 sudo[35074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:44 compute-0 sshd-session[34996]: Invalid user lch from 80.253.31.232 port 42838
Oct 01 13:01:44 compute-0 sshd-session[34996]: Received disconnect from 80.253.31.232 port 42838:11: Bye Bye [preauth]
Oct 01 13:01:44 compute-0 sshd-session[34996]: Disconnected from invalid user lch 80.253.31.232 port 42838 [preauth]
Oct 01 13:01:44 compute-0 python3.9[35076]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:01:44 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 13:01:44 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 01 13:01:44 compute-0 kernel: Bridge firewalling registered
Oct 01 13:01:44 compute-0 systemd-modules-load[35080]: Inserted module 'br_netfilter'
Oct 01 13:01:44 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 13:01:44 compute-0 sudo[35074]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:45 compute-0 sudo[35233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzqifkijzgrdoibxkviveeslkebluact ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323704.8960533-345-37116191046091/AnsiballZ_stat.py'
Oct 01 13:01:45 compute-0 sudo[35233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:45 compute-0 python3.9[35235]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:01:45 compute-0 sudo[35233]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:45 compute-0 sudo[35356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcdfmvwqgiidachvssgfbyzjcatnerg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323704.8960533-345-37116191046091/AnsiballZ_copy.py'
Oct 01 13:01:45 compute-0 sudo[35356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:45 compute-0 python3.9[35358]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323704.8960533-345-37116191046091/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:01:45 compute-0 sudo[35356]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:46 compute-0 sudo[35508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jorblqhlscozgzttydvnznloecxsldez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323706.1654878-363-209728499417354/AnsiballZ_dnf.py'
Oct 01 13:01:46 compute-0 sudo[35508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:46 compute-0 python3.9[35510]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:01:50 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:01:50 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:01:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:01:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:01:50 compute-0 systemd[1]: Reloading.
Oct 01 13:01:50 compute-0 systemd-rc-local-generator[35569]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:01:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:01:51 compute-0 sudo[35508]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:52 compute-0 python3.9[37067]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:01:53 compute-0 python3.9[38016]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 01 13:01:53 compute-0 python3.9[38781]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:01:54 compute-0 sudo[39614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gludaegedhxvbzepkjchjftaimgnllxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323713.8188567-402-125777149563397/AnsiballZ_command.py'
Oct 01 13:01:54 compute-0 sudo[39614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:54 compute-0 python3.9[39637]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:01:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:01:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:01:54 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.385s CPU time.
Oct 01 13:01:54 compute-0 systemd[1]: run-r3be6f599e91444c8abb7ea88fc75c5d1.service: Deactivated successfully.
Oct 01 13:01:54 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 13:01:54 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 13:01:54 compute-0 sudo[39614]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:55 compute-0 sudo[40054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythdcvsdrumjawymkejoapbodsmsjvxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323715.0629263-411-56677307617624/AnsiballZ_systemd.py'
Oct 01 13:01:55 compute-0 sudo[40054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:55 compute-0 python3.9[40056]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:01:55 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 01 13:01:55 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 01 13:01:55 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 01 13:01:55 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 13:01:55 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 13:01:55 compute-0 sudo[40054]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:56 compute-0 python3.9[40220]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 01 13:01:56 compute-0 sshd-session[40057]: Invalid user ryan from 27.254.137.144 port 37574
Oct 01 13:01:57 compute-0 sshd-session[40057]: Received disconnect from 27.254.137.144 port 37574:11: Bye Bye [preauth]
Oct 01 13:01:57 compute-0 sshd-session[40057]: Disconnected from invalid user ryan 27.254.137.144 port 37574 [preauth]
Oct 01 13:01:58 compute-0 sudo[40370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkudbfljgkemaftbqhrlfggkgxqgiwnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323718.3229823-468-241201518603982/AnsiballZ_systemd.py'
Oct 01 13:01:58 compute-0 sudo[40370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:01:59 compute-0 python3.9[40372]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:01:59 compute-0 systemd[1]: Reloading.
Oct 01 13:01:59 compute-0 systemd-rc-local-generator[40399]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:01:59 compute-0 sudo[40370]: pam_unix(sudo:session): session closed for user root
Oct 01 13:01:59 compute-0 sudo[40559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrkqklojejoodxkprzncbezyhmizxkju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323719.4479005-468-98016608268674/AnsiballZ_systemd.py'
Oct 01 13:01:59 compute-0 sudo[40559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:00 compute-0 python3.9[40561]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:02:00 compute-0 systemd[1]: Reloading.
Oct 01 13:02:00 compute-0 systemd-rc-local-generator[40588]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:02:00 compute-0 sudo[40559]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:00 compute-0 sudo[40748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sirpgdvaktwpdpqlimlpnvorwjiouajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323720.5327463-484-39313649759279/AnsiballZ_command.py'
Oct 01 13:02:00 compute-0 sudo[40748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:01 compute-0 python3.9[40750]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:02:01 compute-0 sudo[40748]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:01 compute-0 sudo[40901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymoyacfuifehxbkhfkqqoexrlfeatxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323721.3050675-492-59541152525463/AnsiballZ_command.py'
Oct 01 13:02:01 compute-0 sudo[40901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:01 compute-0 python3.9[40903]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:02:01 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 01 13:02:01 compute-0 sudo[40901]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:02 compute-0 sudo[41055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzptytaucpttxvxszcxmvdkakufwpbdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323722.075603-500-241975285839700/AnsiballZ_command.py'
Oct 01 13:02:02 compute-0 sudo[41055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:02 compute-0 python3.9[41057]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:02:03 compute-0 sudo[41055]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:04 compute-0 sudo[41217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocdtzqlvnggeagrmcaskhbwucorlvdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323724.1324613-508-7572751166182/AnsiballZ_command.py'
Oct 01 13:02:04 compute-0 sudo[41217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:04 compute-0 python3.9[41219]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:02:04 compute-0 sudo[41217]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:05 compute-0 sudo[41370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwuodzlhjotknvjspmisjejslaltwnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323724.8754265-516-257614100761873/AnsiballZ_systemd.py'
Oct 01 13:02:05 compute-0 sudo[41370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:05 compute-0 python3.9[41372]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:02:05 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 01 13:02:05 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Oct 01 13:02:05 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Oct 01 13:02:05 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 01 13:02:05 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 01 13:02:05 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 01 13:02:05 compute-0 sudo[41370]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:05 compute-0 sshd-session[28383]: Connection closed by 192.168.122.30 port 60678
Oct 01 13:02:05 compute-0 sshd-session[28380]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:02:05 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 01 13:02:05 compute-0 systemd[1]: session-8.scope: Consumed 2min 9.398s CPU time.
Oct 01 13:02:05 compute-0 systemd-logind[818]: Session 8 logged out. Waiting for processes to exit.
Oct 01 13:02:05 compute-0 systemd-logind[818]: Removed session 8.
Oct 01 13:02:06 compute-0 sshd-session[41403]: Invalid user test from 156.236.31.46 port 43130
Oct 01 13:02:06 compute-0 sshd-session[41403]: Received disconnect from 156.236.31.46 port 43130:11: Bye Bye [preauth]
Oct 01 13:02:06 compute-0 sshd-session[41403]: Disconnected from invalid user test 156.236.31.46 port 43130 [preauth]
Oct 01 13:02:11 compute-0 sshd-session[41405]: Accepted publickey for zuul from 192.168.122.30 port 34550 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:02:11 compute-0 systemd-logind[818]: New session 9 of user zuul.
Oct 01 13:02:11 compute-0 systemd[1]: Started Session 9 of User zuul.
Oct 01 13:02:11 compute-0 sshd-session[41405]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:02:12 compute-0 python3.9[41558]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:02:13 compute-0 sudo[41712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygrksqosbumulqtompsenmqivymdpwyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323732.6397696-36-190429920990003/AnsiballZ_getent.py'
Oct 01 13:02:13 compute-0 sudo[41712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:13 compute-0 python3.9[41714]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 01 13:02:13 compute-0 sudo[41712]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:14 compute-0 sudo[41865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxdryflreqecqwvrplmleknonlttruix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323733.5525625-44-253162771278456/AnsiballZ_group.py'
Oct 01 13:02:14 compute-0 sudo[41865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:14 compute-0 python3.9[41867]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:02:14 compute-0 groupadd[41868]: group added to /etc/group: name=openvswitch, GID=42476
Oct 01 13:02:14 compute-0 groupadd[41868]: group added to /etc/gshadow: name=openvswitch
Oct 01 13:02:14 compute-0 groupadd[41868]: new group: name=openvswitch, GID=42476
Oct 01 13:02:14 compute-0 sudo[41865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:15 compute-0 sudo[42023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etsuoyeafwtcbttqwhqomygodjqmchsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323734.5263293-52-182590815655640/AnsiballZ_user.py'
Oct 01 13:02:15 compute-0 sudo[42023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:15 compute-0 python3.9[42025]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 13:02:15 compute-0 useradd[42027]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 13:02:15 compute-0 useradd[42027]: add 'openvswitch' to group 'hugetlbfs'
Oct 01 13:02:15 compute-0 useradd[42027]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 01 13:02:15 compute-0 sudo[42023]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:15 compute-0 sudo[42183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbpeuluoyletcfvqrjkksvlnkoqmgofm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323735.6260955-62-212424401393835/AnsiballZ_setup.py'
Oct 01 13:02:15 compute-0 sudo[42183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:16 compute-0 python3.9[42185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:02:16 compute-0 sudo[42183]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:16 compute-0 sudo[42267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtodaqkwpeydhhmiwtebpjwrjbocfpqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323735.6260955-62-212424401393835/AnsiballZ_dnf.py'
Oct 01 13:02:16 compute-0 sudo[42267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:17 compute-0 python3.9[42269]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 13:02:19 compute-0 sudo[42267]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:19 compute-0 sudo[42430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqfnoecjwltccllchstalvbxinbuudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323739.4398246-76-268111287428036/AnsiballZ_dnf.py'
Oct 01 13:02:19 compute-0 sudo[42430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:19 compute-0 python3.9[42432]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:02:24 compute-0 sshd-session[42447]: Invalid user himanshu from 200.7.101.139 port 50148
Oct 01 13:02:24 compute-0 sshd-session[42447]: Received disconnect from 200.7.101.139 port 50148:11: Bye Bye [preauth]
Oct 01 13:02:24 compute-0 sshd-session[42447]: Disconnected from invalid user himanshu 200.7.101.139 port 50148 [preauth]
Oct 01 13:02:30 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:02:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:02:30 compute-0 groupadd[42457]: group added to /etc/group: name=unbound, GID=993
Oct 01 13:02:30 compute-0 groupadd[42457]: group added to /etc/gshadow: name=unbound
Oct 01 13:02:30 compute-0 groupadd[42457]: new group: name=unbound, GID=993
Oct 01 13:02:30 compute-0 useradd[42464]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 01 13:02:31 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 01 13:02:31 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 01 13:02:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:02:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:02:32 compute-0 systemd[1]: Reloading.
Oct 01 13:02:32 compute-0 systemd-rc-local-generator[42957]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:02:32 compute-0 systemd-sysv-generator[42963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:02:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:02:33 compute-0 sudo[42430]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:02:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:02:33 compute-0 systemd[1]: run-r55f3e64d89c948958c366809268950d5.service: Deactivated successfully.
Oct 01 13:02:34 compute-0 sudo[43534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhiffvlnmphwckdfqkdrkhihieitwabs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323753.5517528-84-180339313740968/AnsiballZ_systemd.py'
Oct 01 13:02:34 compute-0 sudo[43534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:34 compute-0 python3.9[43536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:02:34 compute-0 systemd[1]: Reloading.
Oct 01 13:02:34 compute-0 systemd-rc-local-generator[43566]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:02:34 compute-0 systemd-sysv-generator[43569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:02:34 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 01 13:02:34 compute-0 chown[43577]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 01 13:02:34 compute-0 ovs-ctl[43582]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 01 13:02:34 compute-0 ovs-ctl[43582]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 01 13:02:35 compute-0 ovs-ctl[43582]: Starting ovsdb-server [  OK  ]
Oct 01 13:02:35 compute-0 ovs-vsctl[43631]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 01 13:02:35 compute-0 ovs-vsctl[43650]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7280030e-2ba6-406c-9fae-f8284a927c47\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 01 13:02:35 compute-0 ovs-ctl[43582]: Configuring Open vSwitch system IDs [  OK  ]
Oct 01 13:02:35 compute-0 ovs-ctl[43582]: Enabling remote OVSDB managers [  OK  ]
Oct 01 13:02:35 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 01 13:02:35 compute-0 ovs-vsctl[43656]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 01 13:02:35 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 01 13:02:35 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 01 13:02:35 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 01 13:02:35 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 01 13:02:35 compute-0 ovs-ctl[43700]: Inserting openvswitch module [  OK  ]
Oct 01 13:02:35 compute-0 ovs-ctl[43669]: Starting ovs-vswitchd [  OK  ]
Oct 01 13:02:35 compute-0 ovs-vsctl[43718]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 01 13:02:35 compute-0 ovs-ctl[43669]: Enabling remote OVSDB managers [  OK  ]
Oct 01 13:02:35 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 01 13:02:35 compute-0 systemd[1]: Starting Open vSwitch...
Oct 01 13:02:35 compute-0 systemd[1]: Finished Open vSwitch.
Oct 01 13:02:35 compute-0 sudo[43534]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:36 compute-0 python3.9[43870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:02:37 compute-0 sudo[44020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywhlzyvgeyvcoqyrpbcheizprdhmgcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323756.6386514-102-57491572393123/AnsiballZ_sefcontext.py'
Oct 01 13:02:37 compute-0 sudo[44020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:37 compute-0 python3.9[44022]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 01 13:02:38 compute-0 kernel: SELinux:  Converting 2738 SID table entries...
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:02:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:02:38 compute-0 sudo[44020]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:39 compute-0 python3.9[44181]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:02:39 compute-0 sshd-session[44132]: Invalid user naveen from 80.253.31.232 port 38660
Oct 01 13:02:39 compute-0 sshd-session[44132]: Received disconnect from 80.253.31.232 port 38660:11: Bye Bye [preauth]
Oct 01 13:02:39 compute-0 sshd-session[44132]: Disconnected from invalid user naveen 80.253.31.232 port 38660 [preauth]
Oct 01 13:02:40 compute-0 sudo[44337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfieyowtijajcczzbjmujiskspknxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323759.8707628-120-209756695189739/AnsiballZ_dnf.py'
Oct 01 13:02:40 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 01 13:02:40 compute-0 sudo[44337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:40 compute-0 sshd-session[44104]: Received disconnect from 45.249.247.86 port 44436:11: Bye Bye [preauth]
Oct 01 13:02:40 compute-0 sshd-session[44104]: Disconnected from authenticating user root 45.249.247.86 port 44436 [preauth]
Oct 01 13:02:40 compute-0 python3.9[44339]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:02:41 compute-0 sudo[44337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:41 compute-0 sudo[44490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swtfaqgyepslxglkehxxefblosakytqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323761.624927-128-143147287327752/AnsiballZ_command.py'
Oct 01 13:02:41 compute-0 sudo[44490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:42 compute-0 python3.9[44492]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:02:42 compute-0 sudo[44490]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:43 compute-0 sudo[44777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdxfizodeyguyninhybmjvdwpvmmbkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323763.1079862-136-54181621475931/AnsiballZ_file.py'
Oct 01 13:02:43 compute-0 sudo[44777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:43 compute-0 python3.9[44779]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 13:02:43 compute-0 sudo[44777]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:44 compute-0 python3.9[44929]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:02:45 compute-0 sudo[45081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxuenjwllvjebpdfwwnjtxhmfevezvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323764.7795439-152-195079371395328/AnsiballZ_dnf.py'
Oct 01 13:02:45 compute-0 sudo[45081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:45 compute-0 python3.9[45083]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:02:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:02:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:02:47 compute-0 systemd[1]: Reloading.
Oct 01 13:02:47 compute-0 systemd-rc-local-generator[45117]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:02:47 compute-0 systemd-sysv-generator[45125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:02:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:02:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:02:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:02:47 compute-0 systemd[1]: run-r191bc22dc7f04e12a3a0318a0a9d1d33.service: Deactivated successfully.
Oct 01 13:02:47 compute-0 sudo[45081]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:48 compute-0 sudo[45398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvuujudhnxwlobhgyovvkijcevlgpeol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323767.843424-160-126409228063529/AnsiballZ_systemd.py'
Oct 01 13:02:48 compute-0 sudo[45398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:48 compute-0 python3.9[45400]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:02:48 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 01 13:02:48 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Oct 01 13:02:48 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Oct 01 13:02:48 compute-0 systemd[1]: Stopping Network Manager...
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5105] caught SIGTERM, shutting down normally.
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5133] dhcp4 (eth0): canceled DHCP transaction
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5134] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5134] dhcp4 (eth0): state changed no lease
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5139] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 13:02:48 compute-0 NetworkManager[4330]: <info>  [1759323768.5239] exiting (success)
Oct 01 13:02:48 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 13:02:48 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 13:02:48 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 01 13:02:48 compute-0 systemd[1]: Stopped Network Manager.
Oct 01 13:02:48 compute-0 systemd[1]: NetworkManager.service: Consumed 9.698s CPU time, 4.1M memory peak, read 0B from disk, written 22.5K to disk.
Oct 01 13:02:48 compute-0 systemd[1]: Starting Network Manager...
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.5894] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:59648e32-2da2-4a47-989c-dbddfc6922f6)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.5897] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.5958] manager[0x55e97e42a090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 01 13:02:48 compute-0 systemd[1]: Starting Hostname Service...
Oct 01 13:02:48 compute-0 systemd[1]: Started Hostname Service.
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6704] hostname: hostname: using hostnamed
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6708] hostname: static hostname changed from (none) to "compute-0"
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6715] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6721] manager[0x55e97e42a090]: rfkill: Wi-Fi hardware radio set enabled
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6722] manager[0x55e97e42a090]: rfkill: WWAN hardware radio set enabled
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6755] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6771] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6772] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6773] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6774] manager: Networking is enabled by state file
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6777] settings: Loaded settings plugin: keyfile (internal)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6783] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6822] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6836] dhcp: init: Using DHCP client 'internal'
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6840] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6851] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6860] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6874] device (lo): Activation: starting connection 'lo' (71a0a298-c086-43ce-b223-7fae93260bdf)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6887] device (eth0): carrier: link connected
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6895] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6905] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6905] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6918] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6930] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6941] device (eth1): carrier: link connected
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6948] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6955] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c) (indicated)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6956] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6964] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6974] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct 01 13:02:48 compute-0 systemd[1]: Started Network Manager.
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6983] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6996] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.6999] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7002] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7007] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7012] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7014] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7018] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7024] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7032] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7035] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7058] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7075] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7085] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7089] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7095] device (lo): Activation: successful, device activated.
Oct 01 13:02:48 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7101] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7108] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7171] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7175] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7181] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7184] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7188] device (eth1): Activation: successful, device activated.
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7200] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7202] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7205] manager: NetworkManager state is now CONNECTED_SITE
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7208] device (eth0): Activation: successful, device activated.
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7213] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 01 13:02:48 compute-0 NetworkManager[45411]: <info>  [1759323768.7215] manager: startup complete
Oct 01 13:02:48 compute-0 sudo[45398]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:48 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 01 13:02:49 compute-0 sudo[45624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adkhcnlnfkwamrduwufgnoaqvqvgedou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323769.0251923-168-129807333992556/AnsiballZ_dnf.py'
Oct 01 13:02:49 compute-0 sudo[45624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:49 compute-0 python3.9[45626]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:02:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:02:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:02:54 compute-0 systemd[1]: Reloading.
Oct 01 13:02:54 compute-0 systemd-rc-local-generator[45679]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:02:54 compute-0 systemd-sysv-generator[45682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:02:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:02:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:02:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:02:55 compute-0 systemd[1]: run-r4393e0c1255343478f8d0f9fd380e944.service: Deactivated successfully.
Oct 01 13:02:55 compute-0 sudo[45624]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:56 compute-0 sudo[46087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjuivsxbajmagshcdmkoejjvwvulpsjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323775.9659975-180-202815178859792/AnsiballZ_stat.py'
Oct 01 13:02:56 compute-0 sudo[46087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:56 compute-0 python3.9[46089]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:02:56 compute-0 sudo[46087]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:56 compute-0 sudo[46239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqdlptkzriygjemncwwwqzxkhatmfhxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323776.5604014-189-28270499806873/AnsiballZ_ini_file.py'
Oct 01 13:02:56 compute-0 sudo[46239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:57 compute-0 python3.9[46241]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:02:57 compute-0 sudo[46239]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:57 compute-0 sudo[46393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbkmhhtpghmxyqbwqroqubhlyzrmimxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323777.43178-199-109210830391006/AnsiballZ_ini_file.py'
Oct 01 13:02:57 compute-0 sudo[46393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:57 compute-0 python3.9[46395]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:02:57 compute-0 sudo[46393]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:58 compute-0 sudo[46545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhnhqixasfpmvichjacsmegiesldsmbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323778.0646589-199-30449475152320/AnsiballZ_ini_file.py'
Oct 01 13:02:58 compute-0 sudo[46545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:58 compute-0 python3.9[46547]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:02:58 compute-0 sudo[46545]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:58 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 13:02:58 compute-0 sudo[46697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkroovnumxgffhwclbsozlpnqcjulgay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323778.7105207-214-172633413334008/AnsiballZ_ini_file.py'
Oct 01 13:02:58 compute-0 sudo[46697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:59 compute-0 python3.9[46699]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:02:59 compute-0 sudo[46697]: pam_unix(sudo:session): session closed for user root
Oct 01 13:02:59 compute-0 sudo[46849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyqnyunodbzzuywzgaanipcrdzcpawfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323779.3974404-214-58338402999920/AnsiballZ_ini_file.py'
Oct 01 13:02:59 compute-0 sudo[46849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:02:59 compute-0 python3.9[46851]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:02:59 compute-0 sudo[46849]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:00 compute-0 sudo[47001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnqcexztgiurzcwsrihkenjzgfemdgji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323780.0926003-229-33731058889208/AnsiballZ_stat.py'
Oct 01 13:03:00 compute-0 sudo[47001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:00 compute-0 python3.9[47003]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:03:00 compute-0 sudo[47001]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:01 compute-0 sudo[47124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvfurpvswvbjwkodsbjaguadxbvutrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323780.0926003-229-33731058889208/AnsiballZ_copy.py'
Oct 01 13:03:01 compute-0 sudo[47124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:01 compute-0 python3.9[47126]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323780.0926003-229-33731058889208/.source _original_basename=.kz3ajbh2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:01 compute-0 sudo[47124]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:01 compute-0 sudo[47276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqoensztdfhhcwyxvuipeelknsbotbww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323781.5492404-244-73982101367586/AnsiballZ_file.py'
Oct 01 13:03:01 compute-0 sudo[47276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:02 compute-0 python3.9[47278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:02 compute-0 sudo[47276]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:02 compute-0 sudo[47428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxtqlwyanlxehqjyrkridrisoyxsbhzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323782.3250663-252-133603098850660/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 01 13:03:02 compute-0 sudo[47428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:03 compute-0 python3.9[47430]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 01 13:03:03 compute-0 sudo[47428]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:03 compute-0 sudo[47580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtohypdncsdnmakazxhlecgcmmmihxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323783.4359457-261-241777612966557/AnsiballZ_file.py'
Oct 01 13:03:03 compute-0 sudo[47580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:03 compute-0 python3.9[47582]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:04 compute-0 sudo[47580]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:04 compute-0 sudo[47732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslwwhmfsvkzvhgloxkjxtwzocovnhyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323784.294579-271-107484786926154/AnsiballZ_stat.py'
Oct 01 13:03:04 compute-0 sudo[47732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:04 compute-0 sudo[47732]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:05 compute-0 sudo[47855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znqzojpzylggjrizdxfyjvuxhqwxncca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323784.294579-271-107484786926154/AnsiballZ_copy.py'
Oct 01 13:03:05 compute-0 sudo[47855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:05 compute-0 sudo[47855]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:06 compute-0 sudo[48007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglvchmxuloyqujclhmifbrqvxrsezzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323785.56245-286-42895746380790/AnsiballZ_slurp.py'
Oct 01 13:03:06 compute-0 sudo[48007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:06 compute-0 python3.9[48009]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 01 13:03:06 compute-0 sudo[48007]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:07 compute-0 sudo[48182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pceddpajbzmreaplajshpvqeirhjwoyv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323786.5309014-295-31303534030536/async_wrapper.py j566808765024 300 /home/zuul/.ansible/tmp/ansible-tmp-1759323786.5309014-295-31303534030536/AnsiballZ_edpm_os_net_config.py _'
Oct 01 13:03:07 compute-0 sudo[48182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:07 compute-0 ansible-async_wrapper.py[48184]: Invoked with j566808765024 300 /home/zuul/.ansible/tmp/ansible-tmp-1759323786.5309014-295-31303534030536/AnsiballZ_edpm_os_net_config.py _
Oct 01 13:03:07 compute-0 ansible-async_wrapper.py[48187]: Starting module and watcher
Oct 01 13:03:07 compute-0 ansible-async_wrapper.py[48187]: Start watching 48188 (300)
Oct 01 13:03:07 compute-0 ansible-async_wrapper.py[48188]: Start module (48188)
Oct 01 13:03:07 compute-0 ansible-async_wrapper.py[48184]: Return async_wrapper task started.
Oct 01 13:03:07 compute-0 sudo[48182]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:07 compute-0 python3.9[48189]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 01 13:03:08 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 01 13:03:08 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 01 13:03:08 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 01 13:03:08 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 01 13:03:08 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6237] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6248] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6631] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6632] audit: op="connection-add" uuid="576c0d87-205d-46e0-8925-225d5c4068f9" name="br-ex-br" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6644] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6646] audit: op="connection-add" uuid="7340bd2a-abdc-4ff8-9f99-ba0bb26a4521" name="br-ex-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6655] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6657] audit: op="connection-add" uuid="743cbaea-84dd-47fc-a646-eef99edaafb5" name="eth1-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6666] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6668] audit: op="connection-add" uuid="071ca334-5b58-407b-9724-7af69cb2805e" name="vlan20-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6676] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6678] audit: op="connection-add" uuid="9f1b328e-d7e5-43f4-8310-a21d774abf3f" name="vlan21-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6686] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6689] audit: op="connection-add" uuid="c99f998e-eb1d-43fd-8389-67258c6b002f" name="vlan22-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6697] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6698] audit: op="connection-add" uuid="311c32b7-17ce-4024-a271-1b159ae741ec" name="vlan23-port" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6714] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6726] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6728] audit: op="connection-add" uuid="40b1467f-e7fd-43dc-9b7f-ad129c590d00" name="br-ex-if" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6752] audit: op="connection-update" uuid="55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c" name="ci-private-network" args="connection.slave-type,connection.master,connection.port-type,connection.controller,connection.timestamp,ipv6.addresses,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ovs-external-ids.data,ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.routes" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6764] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6765] audit: op="connection-add" uuid="937e280a-d092-44bf-a162-4873dbffa638" name="vlan20-if" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6777] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6779] audit: op="connection-add" uuid="9b828b33-f4bb-4f80-9a32-10eb798ec1b4" name="vlan21-if" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6792] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6794] audit: op="connection-add" uuid="a931c5cd-1887-4874-909e-f77dd691887a" name="vlan22-if" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6806] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6807] audit: op="connection-add" uuid="fceb399c-689d-44a7-814e-0e134949fe2b" name="vlan23-if" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6816] audit: op="connection-delete" uuid="5676b0c3-8d77-3352-b8fd-5d58f5ca7d01" name="Wired connection 1" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6825] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6835] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6841] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (576c0d87-205d-46e0-8925-225d5c4068f9)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6842] audit: op="connection-activate" uuid="576c0d87-205d-46e0-8925-225d5c4068f9" name="br-ex-br" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6843] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6848] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6852] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (7340bd2a-abdc-4ff8-9f99-ba0bb26a4521)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6853] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6858] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6862] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (743cbaea-84dd-47fc-a646-eef99edaafb5)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6864] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6870] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6874] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (071ca334-5b58-407b-9724-7af69cb2805e)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6876] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6883] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6886] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9f1b328e-d7e5-43f4-8310-a21d774abf3f)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6888] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6893] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6896] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c99f998e-eb1d-43fd-8389-67258c6b002f)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6898] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6903] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6906] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (311c32b7-17ce-4024-a271-1b159ae741ec)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6907] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6909] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6911] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6916] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6920] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6923] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (40b1467f-e7fd-43dc-9b7f-ad129c590d00)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6924] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6926] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6928] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6929] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6931] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6939] device (eth1): disconnecting for new activation request.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6940] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6942] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6944] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6945] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6947] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6951] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6955] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (937e280a-d092-44bf-a162-4873dbffa638)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6956] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6958] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6960] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6961] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6964] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6967] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6971] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9b828b33-f4bb-4f80-9a32-10eb798ec1b4)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6972] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6974] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6976] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6978] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6980] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6984] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6987] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (a931c5cd-1887-4874-909e-f77dd691887a)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6989] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6991] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6993] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6994] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.6996] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7001] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7004] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (fceb399c-689d-44a7-814e-0e134949fe2b)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7005] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7008] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7010] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7012] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7013] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7022] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7024] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7027] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7029] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7035] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7038] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7041] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7044] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7046] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7050] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7054] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7058] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7059] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7063] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7066] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 systemd-udevd[48196]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7069] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7070] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7075] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: Timeout policy base is empty
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7079] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7082] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7084] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7089] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): canceled DHCP transaction
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7092] dhcp4 (eth0): state changed no lease
Oct 01 13:03:09 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7093] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7103] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7107] audit: op="device-reapply" interface="eth1" ifindex=3 pid=48190 uid=0 result="fail" reason="Device is not activated"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7142] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7149] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7155] device (eth1): disconnecting for new activation request.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7156] audit: op="connection-activate" uuid="55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c" name="ci-private-network" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7157] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7161] dhcp4 (eth0): state changed new lease, address=38.102.83.245
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7164] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 01 13:03:09 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7227] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7234] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7346] device (eth1): Activation: starting connection 'ci-private-network' (55fba695-b9ed-5ba6-ac3c-7f0c4ae7e99c)
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7350] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7356] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7359] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: br-ex: entered promiscuous mode
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7367] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7371] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7375] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7376] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7377] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7379] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7380] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7381] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7393] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7398] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7400] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7403] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7405] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7408] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7411] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7413] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7415] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7418] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7420] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7422] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7425] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7429] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7434] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: vlan22: entered promiscuous mode
Oct 01 13:03:09 compute-0 systemd-udevd[48194]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:03:09 compute-0 kernel: vlan21: entered promiscuous mode
Oct 01 13:03:09 compute-0 systemd-udevd[48195]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7495] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7497] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7514] device (eth1): Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7523] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7533] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: vlan20: entered promiscuous mode
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7563] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7564] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7568] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 kernel: vlan23: entered promiscuous mode
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7613] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7618] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7633] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7638] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7680] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7681] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7682] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7686] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7690] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7702] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7716] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7719] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7738] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7746] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7792] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7793] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7794] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7801] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7808] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 01 13:03:09 compute-0 NetworkManager[45411]: <info>  [1759323789.7814] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 01 13:03:10 compute-0 sshd-session[48255]: Received disconnect from 156.236.31.46 port 43218:11: Bye Bye [preauth]
Oct 01 13:03:10 compute-0 sshd-session[48255]: Disconnected from authenticating user root 156.236.31.46 port 43218 [preauth]
Oct 01 13:03:10 compute-0 NetworkManager[45411]: <info>  [1759323790.9132] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.0998] checkpoint[0x55e97e400950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.1000] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 sudo[48550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muiuygjobzgawfezinsnomqnatrdejou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323790.7551537-295-241522590835225/AnsiballZ_async_status.py'
Oct 01 13:03:11 compute-0 sudo[48550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.3771] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.3779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.5466] audit: op="networking-control" arg="global-dns-configuration" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.5497] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.5521] audit: op="networking-control" arg="global-dns-configuration" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.5685] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 python3.9[48552]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=status _async_dir=/root/.ansible_async
Oct 01 13:03:11 compute-0 sudo[48550]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.7117] checkpoint[0x55e97e400a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 01 13:03:11 compute-0 NetworkManager[45411]: <info>  [1759323791.7121] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=48190 uid=0 result="success"
Oct 01 13:03:11 compute-0 ansible-async_wrapper.py[48188]: Module complete (48188)
Oct 01 13:03:12 compute-0 ansible-async_wrapper.py[48187]: Done in kid B.
Oct 01 13:03:13 compute-0 sshd-session[48558]: Invalid user phil from 27.254.137.144 port 33126
Oct 01 13:03:13 compute-0 sshd-session[48558]: Received disconnect from 27.254.137.144 port 33126:11: Bye Bye [preauth]
Oct 01 13:03:13 compute-0 sshd-session[48558]: Disconnected from invalid user phil 27.254.137.144 port 33126 [preauth]
Oct 01 13:03:14 compute-0 sudo[48657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witvzcwccyegfrvmfntyxvqhvbgngakk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323790.7551537-295-241522590835225/AnsiballZ_async_status.py'
Oct 01 13:03:14 compute-0 sudo[48657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:15 compute-0 python3.9[48659]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=status _async_dir=/root/.ansible_async
Oct 01 13:03:15 compute-0 sudo[48657]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:15 compute-0 sudo[48756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbtpwyeprhaiqdwrcwjbzbvwutojvbqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323790.7551537-295-241522590835225/AnsiballZ_async_status.py'
Oct 01 13:03:15 compute-0 sudo[48756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:15 compute-0 python3.9[48758]: ansible-ansible.legacy.async_status Invoked with jid=j566808765024.48184 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 13:03:15 compute-0 sudo[48756]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:16 compute-0 sudo[48908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgiquttgypgfdvlnyuomrciwfvkescdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323795.9362414-322-232223844582217/AnsiballZ_stat.py'
Oct 01 13:03:16 compute-0 sudo[48908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:16 compute-0 python3.9[48910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:03:16 compute-0 sudo[48908]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:16 compute-0 sudo[49031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llujuqstnrixdzalzbtnwswsyqdsprwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323795.9362414-322-232223844582217/AnsiballZ_copy.py'
Oct 01 13:03:16 compute-0 sudo[49031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:17 compute-0 python3.9[49033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323795.9362414-322-232223844582217/.source.returncode _original_basename=.5k934zip follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:17 compute-0 sudo[49031]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:17 compute-0 sudo[49183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztgwinbvdmvmpapnwfhmknladhdsyrsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323797.230515-338-126594682873439/AnsiballZ_stat.py'
Oct 01 13:03:17 compute-0 sudo[49183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:17 compute-0 python3.9[49185]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:03:17 compute-0 sudo[49183]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:18 compute-0 sudo[49306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjdzbojhgjgpgybihnnhpecklpydxkwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323797.230515-338-126594682873439/AnsiballZ_copy.py'
Oct 01 13:03:18 compute-0 sudo[49306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:18 compute-0 python3.9[49308]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323797.230515-338-126594682873439/.source.cfg _original_basename=.w8moerjk follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:18 compute-0 sudo[49306]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:18 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 01 13:03:18 compute-0 sudo[49462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwfekhhbxfrdklkgygdoiptgxkuortu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323798.5589497-353-224573220860200/AnsiballZ_systemd.py'
Oct 01 13:03:18 compute-0 sudo[49462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:19 compute-0 python3.9[49464]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:03:19 compute-0 systemd[1]: Reloading Network Manager...
Oct 01 13:03:19 compute-0 NetworkManager[45411]: <info>  [1759323799.2860] audit: op="reload" arg="0" pid=49468 uid=0 result="success"
Oct 01 13:03:19 compute-0 NetworkManager[45411]: <info>  [1759323799.2866] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 01 13:03:19 compute-0 systemd[1]: Reloaded Network Manager.
Oct 01 13:03:19 compute-0 sudo[49462]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:19 compute-0 sshd-session[41408]: Connection closed by 192.168.122.30 port 34550
Oct 01 13:03:19 compute-0 sshd-session[41405]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:03:19 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Oct 01 13:03:19 compute-0 systemd[1]: session-9.scope: Consumed 50.047s CPU time.
Oct 01 13:03:19 compute-0 systemd-logind[818]: Session 9 logged out. Waiting for processes to exit.
Oct 01 13:03:19 compute-0 systemd-logind[818]: Removed session 9.
Oct 01 13:03:21 compute-0 sshd-session[49318]: Invalid user mana from 202.103.55.158 port 42122
Oct 01 13:03:22 compute-0 sshd-session[49318]: Received disconnect from 202.103.55.158 port 42122:11: Bye Bye [preauth]
Oct 01 13:03:22 compute-0 sshd-session[49318]: Disconnected from invalid user mana 202.103.55.158 port 42122 [preauth]
Oct 01 13:03:24 compute-0 sshd-session[49499]: Accepted publickey for zuul from 192.168.122.30 port 59078 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:03:24 compute-0 systemd-logind[818]: New session 10 of user zuul.
Oct 01 13:03:24 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 01 13:03:24 compute-0 sshd-session[49499]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:03:25 compute-0 python3.9[49653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:03:26 compute-0 python3.9[49807]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:03:28 compute-0 python3.9[50001]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:03:28 compute-0 sshd-session[49503]: Connection closed by 192.168.122.30 port 59078
Oct 01 13:03:28 compute-0 sshd-session[49499]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:03:28 compute-0 systemd-logind[818]: Session 10 logged out. Waiting for processes to exit.
Oct 01 13:03:28 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 01 13:03:28 compute-0 systemd[1]: session-10.scope: Consumed 2.530s CPU time.
Oct 01 13:03:28 compute-0 systemd-logind[818]: Removed session 10.
Oct 01 13:03:29 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 13:03:33 compute-0 sshd-session[50029]: Accepted publickey for zuul from 192.168.122.30 port 49648 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:03:33 compute-0 systemd-logind[818]: New session 11 of user zuul.
Oct 01 13:03:33 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 01 13:03:33 compute-0 sshd-session[50029]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:03:34 compute-0 python3.9[50183]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:03:35 compute-0 sshd-session[50184]: Received disconnect from 80.253.31.232 port 38106:11: Bye Bye [preauth]
Oct 01 13:03:35 compute-0 sshd-session[50184]: Disconnected from authenticating user root 80.253.31.232 port 38106 [preauth]
Oct 01 13:03:35 compute-0 python3.9[50339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:03:36 compute-0 sudo[50493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frizmdzrextmapxklseygtewbsquvxwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323816.1713393-40-53248316300777/AnsiballZ_setup.py'
Oct 01 13:03:36 compute-0 sudo[50493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:36 compute-0 python3.9[50495]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:03:36 compute-0 sudo[50493]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:37 compute-0 sudo[50580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftimnablmujyhnjyyyjpcqopwrzuqyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323816.1713393-40-53248316300777/AnsiballZ_dnf.py'
Oct 01 13:03:37 compute-0 sudo[50580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:37 compute-0 sshd-session[50504]: Invalid user test12 from 200.7.101.139 port 54710
Oct 01 13:03:37 compute-0 sshd-session[50504]: Received disconnect from 200.7.101.139 port 54710:11: Bye Bye [preauth]
Oct 01 13:03:37 compute-0 sshd-session[50504]: Disconnected from invalid user test12 200.7.101.139 port 54710 [preauth]
Oct 01 13:03:37 compute-0 python3.9[50582]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:03:38 compute-0 sudo[50580]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:39 compute-0 sudo[50733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhgckzvvsxiezdaiwbmzcsfsiazpoeko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323818.9635193-52-27926071753370/AnsiballZ_setup.py'
Oct 01 13:03:39 compute-0 sudo[50733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:39 compute-0 sshd[1010]: Timeout before authentication for connection from 202.103.55.158 to 38.102.83.245, pid = 33710
Oct 01 13:03:39 compute-0 python3.9[50735]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:03:39 compute-0 sudo[50733]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:40 compute-0 sudo[50929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiiskkmwomhmqkwobvyfrwtsfukywrjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323820.1763508-63-78814831119837/AnsiballZ_file.py'
Oct 01 13:03:40 compute-0 sudo[50929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:40 compute-0 python3.9[50931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:40 compute-0 sudo[50929]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:41 compute-0 sudo[51081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbkonwmjdqgapguycovvdlaoxlfbmknn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323820.933869-71-258200235031446/AnsiballZ_command.py'
Oct 01 13:03:41 compute-0 sudo[51081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:41 compute-0 python3.9[51083]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1757742756-merged.mount: Deactivated successfully.
Oct 01 13:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1787590415-merged.mount: Deactivated successfully.
Oct 01 13:03:41 compute-0 podman[51084]: 2025-10-01 13:03:41.645788654 +0000 UTC m=+0.057592824 system refresh
Oct 01 13:03:41 compute-0 sudo[51081]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:42 compute-0 sudo[51244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzzxnwmonwjxzuhccnpwvtlmddsaytn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323821.8756905-79-96361769066963/AnsiballZ_stat.py'
Oct 01 13:03:42 compute-0 sudo[51244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:42 compute-0 python3.9[51246]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:03:42 compute-0 sudo[51244]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:03:43 compute-0 sudo[51367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bydhewgfnffwbxnzzfvbmpahcvwubfhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323821.8756905-79-96361769066963/AnsiballZ_copy.py'
Oct 01 13:03:43 compute-0 sudo[51367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:43 compute-0 python3.9[51369]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323821.8756905-79-96361769066963/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ccae831033b5b85a94db60a554cc1970129a9c74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:03:43 compute-0 sudo[51367]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:43 compute-0 sudo[51519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rotyjpcmnjspyibpqnazgxsbeauilsfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323823.4099803-94-54161465617170/AnsiballZ_stat.py'
Oct 01 13:03:43 compute-0 sudo[51519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:43 compute-0 python3.9[51521]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:03:43 compute-0 sudo[51519]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:44 compute-0 sudo[51642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhqoizazsfpkfvdovnqlguppgxqngstu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323823.4099803-94-54161465617170/AnsiballZ_copy.py'
Oct 01 13:03:44 compute-0 sudo[51642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:44 compute-0 python3.9[51644]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323823.4099803-94-54161465617170/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:03:44 compute-0 sudo[51642]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:45 compute-0 sudo[51794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahvtcpyzaxvdvwpvrgnfvdmtpoycneb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323824.6862926-110-188445624527723/AnsiballZ_ini_file.py'
Oct 01 13:03:45 compute-0 sudo[51794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:45 compute-0 python3.9[51796]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:03:45 compute-0 sudo[51794]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:45 compute-0 sudo[51946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvyvgafqyleakpxshwtxksaaksdbchhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323825.5365472-110-184249387787160/AnsiballZ_ini_file.py'
Oct 01 13:03:45 compute-0 sudo[51946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:45 compute-0 python3.9[51948]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:03:45 compute-0 sudo[51946]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:46 compute-0 sudo[52098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwutqvbhzcugijapgziqlasonblngvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323826.1300411-110-121873009222217/AnsiballZ_ini_file.py'
Oct 01 13:03:46 compute-0 sudo[52098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:46 compute-0 python3.9[52100]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:03:46 compute-0 sudo[52098]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:46 compute-0 sudo[52250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wstbbwqyfdnxjgkubjkrppfrjxmchuqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323826.66415-110-193180079132879/AnsiballZ_ini_file.py'
Oct 01 13:03:46 compute-0 sudo[52250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:47 compute-0 python3.9[52252]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:03:47 compute-0 sudo[52250]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:47 compute-0 sudo[52402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimbdocqrvawjesvrlxhzrisiyqavvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323827.3491578-141-261226487339602/AnsiballZ_dnf.py'
Oct 01 13:03:47 compute-0 sudo[52402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:47 compute-0 python3.9[52404]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:03:48 compute-0 sudo[52402]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:49 compute-0 sudo[52555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsutbhiywjicuggrakxzvlcdgchiecky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323829.2767282-152-109125398387102/AnsiballZ_setup.py'
Oct 01 13:03:49 compute-0 sudo[52555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:49 compute-0 python3.9[52557]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:03:49 compute-0 sudo[52555]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:50 compute-0 sudo[52709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcijkmgmgtwdbdekjlrxzxfogkhzgsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323830.0063717-160-229009669343930/AnsiballZ_stat.py'
Oct 01 13:03:50 compute-0 sudo[52709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:50 compute-0 python3.9[52711]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:03:50 compute-0 sudo[52709]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:50 compute-0 sudo[52861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvvpamiclmflddzjfdocclrnjazdtobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323830.6909442-169-115667474705775/AnsiballZ_stat.py'
Oct 01 13:03:50 compute-0 sudo[52861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:51 compute-0 python3.9[52863]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:03:51 compute-0 sudo[52861]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:51 compute-0 sudo[53013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gajzvmaleruurcwkiseucdbvtdbgctde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323831.4205008-179-118331274146316/AnsiballZ_service_facts.py'
Oct 01 13:03:51 compute-0 sudo[53013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:51 compute-0 python3.9[53015]: ansible-service_facts Invoked
Oct 01 13:03:52 compute-0 network[53032]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:03:52 compute-0 network[53033]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:03:52 compute-0 network[53034]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:03:53 compute-0 sshd[1010]: drop connection #0 from [202.103.55.158]:49064 on [38.102.83.245]:22 penalty: exceeded LoginGraceTime
Oct 01 13:03:54 compute-0 sudo[53013]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:55 compute-0 sudo[53319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayhpincbsinuyahpukorpbxluvejutlx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759323835.3935716-192-252094908766271/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759323835.3935716-192-252094908766271/args'
Oct 01 13:03:55 compute-0 sudo[53319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:55 compute-0 sudo[53319]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:56 compute-0 sudo[53486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcflzgssyrkkyxwycpqrrrvwtvwtdymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323835.999714-203-78544826283859/AnsiballZ_dnf.py'
Oct 01 13:03:56 compute-0 sudo[53486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:56 compute-0 python3.9[53488]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:03:57 compute-0 sudo[53486]: pam_unix(sudo:session): session closed for user root
Oct 01 13:03:58 compute-0 sudo[53639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbdvyheklkqavtqpssxyxswdilbcwgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323837.9936187-216-229038833448572/AnsiballZ_package_facts.py'
Oct 01 13:03:58 compute-0 sudo[53639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:03:58 compute-0 python3.9[53641]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 01 13:03:59 compute-0 sudo[53639]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:00 compute-0 sudo[53791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxhdawgszeqwsawchiijrbpenkqewvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323839.5770307-226-159997058782265/AnsiballZ_stat.py'
Oct 01 13:04:00 compute-0 sudo[53791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:00 compute-0 python3.9[53793]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:00 compute-0 sudo[53791]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:00 compute-0 sudo[53916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhstonktpvzytkxkdnwmobtpyppdjjfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323839.5770307-226-159997058782265/AnsiballZ_copy.py'
Oct 01 13:04:00 compute-0 sudo[53916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:00 compute-0 python3.9[53918]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323839.5770307-226-159997058782265/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:00 compute-0 sudo[53916]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:01 compute-0 sudo[54070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khpekdexonltzkpwhrnqwiwoymolcnxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323840.9632306-241-276972557380664/AnsiballZ_stat.py'
Oct 01 13:04:01 compute-0 sudo[54070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:01 compute-0 python3.9[54072]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:01 compute-0 sudo[54070]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:01 compute-0 sudo[54195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnyrdzfnqwcrjzlupbrnhlkvswzdhpwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323840.9632306-241-276972557380664/AnsiballZ_copy.py'
Oct 01 13:04:01 compute-0 sudo[54195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:02 compute-0 python3.9[54197]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323840.9632306-241-276972557380664/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:02 compute-0 sudo[54195]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:02 compute-0 sudo[54349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snoupawipcifhqatoohwjhdpqnnmgjni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323842.4869008-262-234848532448732/AnsiballZ_lineinfile.py'
Oct 01 13:04:02 compute-0 sudo[54349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:03 compute-0 python3.9[54351]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:03 compute-0 sudo[54349]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:03 compute-0 sudo[54503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fademqmpwkrmywmguuzebwvxdjaeescd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323843.5156302-277-141827042199212/AnsiballZ_setup.py'
Oct 01 13:04:03 compute-0 sudo[54503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:04 compute-0 python3.9[54505]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:04:04 compute-0 sudo[54503]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:05 compute-0 sudo[54587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyowponeowtklkcxaliizlybrihcshtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323843.5156302-277-141827042199212/AnsiballZ_systemd.py'
Oct 01 13:04:05 compute-0 sudo[54587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:05 compute-0 python3.9[54589]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:05 compute-0 sudo[54587]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:06 compute-0 sudo[54741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgawvuosanagkbfanlbrxxlnskmibops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323845.8479116-293-242140989005446/AnsiballZ_setup.py'
Oct 01 13:04:06 compute-0 sudo[54741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:06 compute-0 python3.9[54743]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:04:06 compute-0 sudo[54741]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:07 compute-0 sudo[54825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syrjtsfqpfrvzkvgocqnoobajnbarerv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323845.8479116-293-242140989005446/AnsiballZ_systemd.py'
Oct 01 13:04:07 compute-0 sudo[54825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:07 compute-0 python3.9[54827]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:04:07 compute-0 chronyd[828]: chronyd exiting
Oct 01 13:04:07 compute-0 systemd[1]: Stopping NTP client/server...
Oct 01 13:04:07 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Oct 01 13:04:07 compute-0 systemd[1]: Stopped NTP client/server.
Oct 01 13:04:07 compute-0 systemd[1]: Starting NTP client/server...
Oct 01 13:04:07 compute-0 chronyd[54836]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 01 13:04:07 compute-0 chronyd[54836]: Frequency -32.086 +/- 0.168 ppm read from /var/lib/chrony/drift
Oct 01 13:04:07 compute-0 chronyd[54836]: Loaded seccomp filter (level 2)
Oct 01 13:04:07 compute-0 systemd[1]: Started NTP client/server.
Oct 01 13:04:07 compute-0 sudo[54825]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:07 compute-0 sshd-session[50032]: Connection closed by 192.168.122.30 port 49648
Oct 01 13:04:07 compute-0 sshd-session[50029]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:04:07 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 01 13:04:07 compute-0 systemd[1]: session-11.scope: Consumed 25.140s CPU time.
Oct 01 13:04:07 compute-0 systemd-logind[818]: Session 11 logged out. Waiting for processes to exit.
Oct 01 13:04:07 compute-0 systemd-logind[818]: Removed session 11.
Oct 01 13:04:13 compute-0 sshd-session[54863]: Invalid user joy from 156.236.31.46 port 43302
Oct 01 13:04:13 compute-0 sshd-session[54863]: Received disconnect from 156.236.31.46 port 43302:11: Bye Bye [preauth]
Oct 01 13:04:13 compute-0 sshd-session[54863]: Disconnected from invalid user joy 156.236.31.46 port 43302 [preauth]
Oct 01 13:04:13 compute-0 sshd-session[54865]: Accepted publickey for zuul from 192.168.122.30 port 57024 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:04:13 compute-0 systemd-logind[818]: New session 12 of user zuul.
Oct 01 13:04:13 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 01 13:04:13 compute-0 sshd-session[54865]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:04:14 compute-0 sudo[55018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixcqmbinngkndebmnjzffqgslhbwbeff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323853.8630998-22-112869852946984/AnsiballZ_file.py'
Oct 01 13:04:14 compute-0 sudo[55018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:14 compute-0 python3.9[55020]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:14 compute-0 sudo[55018]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:15 compute-0 sudo[55170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlcdtlnmvrzlmageqpofjigtrwlirxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323854.9604046-34-61455332101926/AnsiballZ_stat.py'
Oct 01 13:04:15 compute-0 sudo[55170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:15 compute-0 python3.9[55172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:15 compute-0 sudo[55170]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:16 compute-0 sudo[55293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqqhinsshyzzngzsmtjgvpkselmmxrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323854.9604046-34-61455332101926/AnsiballZ_copy.py'
Oct 01 13:04:16 compute-0 sudo[55293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:16 compute-0 python3.9[55295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323854.9604046-34-61455332101926/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:16 compute-0 sudo[55293]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:16 compute-0 sshd-session[54868]: Connection closed by 192.168.122.30 port 57024
Oct 01 13:04:16 compute-0 sshd-session[54865]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:04:16 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 01 13:04:16 compute-0 systemd[1]: session-12.scope: Consumed 1.827s CPU time.
Oct 01 13:04:16 compute-0 systemd-logind[818]: Session 12 logged out. Waiting for processes to exit.
Oct 01 13:04:16 compute-0 systemd-logind[818]: Removed session 12.
Oct 01 13:04:22 compute-0 sshd-session[55320]: Accepted publickey for zuul from 192.168.122.30 port 37614 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:04:22 compute-0 systemd-logind[818]: New session 13 of user zuul.
Oct 01 13:04:22 compute-0 systemd[1]: Started Session 13 of User zuul.
Oct 01 13:04:22 compute-0 sshd-session[55320]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:04:23 compute-0 python3.9[55475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:04:24 compute-0 sshd-session[55451]: Invalid user seekcy from 27.254.137.144 port 56962
Oct 01 13:04:24 compute-0 sudo[55629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skncissijmwgieznykgrumwrtpkofnhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323863.6508133-33-272233175759834/AnsiballZ_file.py'
Oct 01 13:04:24 compute-0 sudo[55629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:24 compute-0 python3.9[55631]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:24 compute-0 sudo[55629]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:24 compute-0 sshd-session[55451]: Received disconnect from 27.254.137.144 port 56962:11: Bye Bye [preauth]
Oct 01 13:04:24 compute-0 sshd-session[55451]: Disconnected from invalid user seekcy 27.254.137.144 port 56962 [preauth]
Oct 01 13:04:25 compute-0 sudo[55804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eptkwvunvtssbwzmoaiphiixtnkzlpgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323864.6009007-41-119665862162150/AnsiballZ_stat.py'
Oct 01 13:04:25 compute-0 sudo[55804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:25 compute-0 python3.9[55806]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:25 compute-0 sudo[55804]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:25 compute-0 sudo[55927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pffqtnekacrdjcvhsfgacqadqqqtbebn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323864.6009007-41-119665862162150/AnsiballZ_copy.py'
Oct 01 13:04:25 compute-0 sudo[55927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:26 compute-0 python3.9[55929]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759323864.6009007-41-119665862162150/.source.json _original_basename=.7syhzya4 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:26 compute-0 sudo[55927]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:26 compute-0 sudo[56079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzsocxwwahsmuyxlccylatqfmiaznnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323866.3910127-64-145600960011233/AnsiballZ_stat.py'
Oct 01 13:04:26 compute-0 sudo[56079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:26 compute-0 python3.9[56081]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:26 compute-0 sudo[56079]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:27 compute-0 sudo[56202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cajxwrrujsnfueevkebsyimiafjdeago ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323866.3910127-64-145600960011233/AnsiballZ_copy.py'
Oct 01 13:04:27 compute-0 sudo[56202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:27 compute-0 python3.9[56204]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323866.3910127-64-145600960011233/.source _original_basename=.jtwhrk_o follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:27 compute-0 sudo[56202]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:27 compute-0 sudo[56354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szebhaljdevtwlsypewhlxxrkryccijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323867.5301278-80-174473125159575/AnsiballZ_file.py'
Oct 01 13:04:27 compute-0 sudo[56354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:28 compute-0 python3.9[56356]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:04:28 compute-0 sudo[56354]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:28 compute-0 sudo[56506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpsbetvmdtydthgqxqmntunovygoaqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323868.19163-88-199167718369256/AnsiballZ_stat.py'
Oct 01 13:04:28 compute-0 sudo[56506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:28 compute-0 python3.9[56508]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:28 compute-0 sudo[56506]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:29 compute-0 sudo[56629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvgcndgnhfdxahguzshgkqeqmogxkokj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323868.19163-88-199167718369256/AnsiballZ_copy.py'
Oct 01 13:04:29 compute-0 sudo[56629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:29 compute-0 python3.9[56631]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323868.19163-88-199167718369256/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:04:29 compute-0 sudo[56629]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:29 compute-0 sudo[56783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdtesqyrjllozkmbemkgcfsyrypazmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323869.4385297-88-245721431231270/AnsiballZ_stat.py'
Oct 01 13:04:29 compute-0 sudo[56783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:29 compute-0 python3.9[56785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:29 compute-0 sudo[56783]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:30 compute-0 sudo[56906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqnvcpeirmyydkiojasfdssocaofkube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323869.4385297-88-245721431231270/AnsiballZ_copy.py'
Oct 01 13:04:30 compute-0 sudo[56906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:30 compute-0 sshd-session[56679]: Received disconnect from 80.253.31.232 port 43638:11: Bye Bye [preauth]
Oct 01 13:04:30 compute-0 sshd-session[56679]: Disconnected from authenticating user root 80.253.31.232 port 43638 [preauth]
Oct 01 13:04:30 compute-0 python3.9[56908]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759323869.4385297-88-245721431231270/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:04:30 compute-0 sudo[56906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:30 compute-0 sudo[57058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odltnzgxnvsedhgongoycfgglhwkruin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323870.683426-117-268459844038779/AnsiballZ_file.py'
Oct 01 13:04:30 compute-0 sudo[57058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:31 compute-0 python3.9[57060]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:31 compute-0 sudo[57058]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:31 compute-0 sudo[57210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqpkefadafmqyctinwuzgwdmamrpvkqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323871.4139986-125-144503824893761/AnsiballZ_stat.py'
Oct 01 13:04:31 compute-0 sudo[57210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:31 compute-0 python3.9[57212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:31 compute-0 sudo[57210]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:32 compute-0 sudo[57333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwyetmeusqynfkkajfrlbjenfziqaxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323871.4139986-125-144503824893761/AnsiballZ_copy.py'
Oct 01 13:04:32 compute-0 sudo[57333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:32 compute-0 python3.9[57335]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323871.4139986-125-144503824893761/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:32 compute-0 sudo[57333]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:33 compute-0 sudo[57485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxletdiqosfjlnjxoxsbrddglugrvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323872.778861-140-150938901457922/AnsiballZ_stat.py'
Oct 01 13:04:33 compute-0 sudo[57485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:33 compute-0 python3.9[57487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:33 compute-0 sudo[57485]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:33 compute-0 sudo[57608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-expnivwcozmgsvjdmdzrgjzqzsecmpdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323872.778861-140-150938901457922/AnsiballZ_copy.py'
Oct 01 13:04:33 compute-0 sudo[57608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:33 compute-0 python3.9[57610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323872.778861-140-150938901457922/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:33 compute-0 sudo[57608]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:34 compute-0 sudo[57760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqahkxddksybcvetmbomobbnsikgfnqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323874.1307564-155-209952567255842/AnsiballZ_systemd.py'
Oct 01 13:04:34 compute-0 sudo[57760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:35 compute-0 python3.9[57762]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:35 compute-0 systemd[1]: Reloading.
Oct 01 13:04:35 compute-0 systemd-rc-local-generator[57790]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:35 compute-0 systemd-sysv-generator[57794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:35 compute-0 systemd[1]: Reloading.
Oct 01 13:04:35 compute-0 systemd-rc-local-generator[57827]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:35 compute-0 systemd-sysv-generator[57830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:35 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 01 13:04:35 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 01 13:04:35 compute-0 sudo[57760]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:36 compute-0 sudo[57988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgcgfhthtdijvsckpsltamjbllrnqso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323875.8936465-163-215856820675543/AnsiballZ_stat.py'
Oct 01 13:04:36 compute-0 sudo[57988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:36 compute-0 python3.9[57990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:36 compute-0 sudo[57988]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:36 compute-0 sudo[58111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfxqprcyucdfyzvvfzjsprrmgglwlnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323875.8936465-163-215856820675543/AnsiballZ_copy.py'
Oct 01 13:04:36 compute-0 sudo[58111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:36 compute-0 python3.9[58113]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323875.8936465-163-215856820675543/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:37 compute-0 sudo[58111]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:37 compute-0 sudo[58263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmyipkrspqqbxgryjmbjxgziwqxbztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323877.1891577-178-124389488367802/AnsiballZ_stat.py'
Oct 01 13:04:37 compute-0 sudo[58263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:37 compute-0 python3.9[58265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:37 compute-0 sudo[58263]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:38 compute-0 sudo[58386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopdswnyrucsohfhgketiqztoeddowem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323877.1891577-178-124389488367802/AnsiballZ_copy.py'
Oct 01 13:04:38 compute-0 sudo[58386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:38 compute-0 python3.9[58388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323877.1891577-178-124389488367802/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:38 compute-0 sudo[58386]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:38 compute-0 sudo[58538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhauzlbsyjmkhmelsptkslikbcpqvyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323878.4111328-193-199166758113139/AnsiballZ_systemd.py'
Oct 01 13:04:38 compute-0 sudo[58538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:39 compute-0 python3.9[58540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:39 compute-0 systemd[1]: Reloading.
Oct 01 13:04:39 compute-0 systemd-rc-local-generator[58571]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:39 compute-0 systemd-sysv-generator[58574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:39 compute-0 systemd[1]: Reloading.
Oct 01 13:04:39 compute-0 systemd-rc-local-generator[58606]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:39 compute-0 systemd-sysv-generator[58611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:39 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:04:39 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:04:39 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:04:39 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:04:39 compute-0 sudo[58538]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:40 compute-0 python3.9[58766]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:04:40 compute-0 network[58783]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:04:40 compute-0 network[58784]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:04:40 compute-0 network[58785]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:04:45 compute-0 sudo[59047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjopanhvepfgvjrdjrmcqrjduegrhic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323885.3358078-209-198620717089399/AnsiballZ_systemd.py'
Oct 01 13:04:45 compute-0 sudo[59047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:46 compute-0 python3.9[59049]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:46 compute-0 systemd[1]: Reloading.
Oct 01 13:04:46 compute-0 systemd-sysv-generator[59079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:46 compute-0 systemd-rc-local-generator[59074]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:46 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 01 13:04:46 compute-0 sshd-session[59050]: Received disconnect from 200.7.101.139 port 36510:11: Bye Bye [preauth]
Oct 01 13:04:46 compute-0 sshd-session[59050]: Disconnected from authenticating user root 200.7.101.139 port 36510 [preauth]
Oct 01 13:04:46 compute-0 iptables.init[59091]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 01 13:04:46 compute-0 iptables.init[59091]: iptables: Flushing firewall rules: [  OK  ]
Oct 01 13:04:46 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Oct 01 13:04:46 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 01 13:04:46 compute-0 sudo[59047]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:47 compute-0 sudo[59285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiamgncgqzojxyngwhnspjescqogeexb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323886.9010448-209-165604786124252/AnsiballZ_systemd.py'
Oct 01 13:04:47 compute-0 sudo[59285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:47 compute-0 python3.9[59287]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:47 compute-0 sudo[59285]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:48 compute-0 sudo[59439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdowptfbrptcbezutnulhylhjihtusos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323887.9197133-225-148011585344036/AnsiballZ_systemd.py'
Oct 01 13:04:48 compute-0 sudo[59439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:48 compute-0 python3.9[59441]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:04:48 compute-0 systemd[1]: Reloading.
Oct 01 13:04:48 compute-0 systemd-sysv-generator[59474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:04:48 compute-0 systemd-rc-local-generator[59470]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:04:48 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 01 13:04:48 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 01 13:04:48 compute-0 sudo[59439]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:49 compute-0 sudo[59631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvartsqkojxpztqjykljartdqdrvunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323889.145789-233-96745512425700/AnsiballZ_command.py'
Oct 01 13:04:49 compute-0 sudo[59631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:49 compute-0 python3.9[59633]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:04:49 compute-0 sudo[59631]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:50 compute-0 sudo[59784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rotnalzqytulhssnnqimyqthhgtxbxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323890.338363-247-271821978966535/AnsiballZ_stat.py'
Oct 01 13:04:50 compute-0 sudo[59784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:50 compute-0 python3.9[59786]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:04:50 compute-0 sudo[59784]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:51 compute-0 sudo[59909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmpmhcblkglrlfqlvggiouuayfohayhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323890.338363-247-271821978966535/AnsiballZ_copy.py'
Oct 01 13:04:51 compute-0 sudo[59909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:04:51 compute-0 python3.9[59911]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323890.338363-247-271821978966535/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:04:51 compute-0 sudo[59909]: pam_unix(sudo:session): session closed for user root
Oct 01 13:04:52 compute-0 python3.9[60062]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:04:52 compute-0 polkitd[6665]: Registered Authentication Agent for unix-process:60064:645316 (system bus name :1.523 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 13:05:17 compute-0 polkit-agent-helper-1[60076]: pam_unix(polkit-1:auth): conversation failed
Oct 01 13:05:17 compute-0 polkit-agent-helper-1[60076]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 01 13:05:17 compute-0 polkitd[6665]: Unregistered Authentication Agent for unix-process:60064:645316 (system bus name :1.523, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 13:05:17 compute-0 polkitd[6665]: Operator of unix-process:60064:645316 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.522 [<unknown>] (owned by unix-user:zuul)
Oct 01 13:05:17 compute-0 sshd-session[55323]: Connection closed by 192.168.122.30 port 37614
Oct 01 13:05:17 compute-0 sshd-session[55320]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:05:17 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 01 13:05:17 compute-0 systemd[1]: session-13.scope: Consumed 20.141s CPU time.
Oct 01 13:05:17 compute-0 systemd-logind[818]: Session 13 logged out. Waiting for processes to exit.
Oct 01 13:05:17 compute-0 systemd-logind[818]: Removed session 13.
Oct 01 13:05:18 compute-0 sshd-session[60102]: Invalid user daniil from 156.236.31.46 port 43394
Oct 01 13:05:18 compute-0 sshd-session[60102]: Received disconnect from 156.236.31.46 port 43394:11: Bye Bye [preauth]
Oct 01 13:05:18 compute-0 sshd-session[60102]: Disconnected from invalid user daniil 156.236.31.46 port 43394 [preauth]
Oct 01 13:05:28 compute-0 sshd-session[60104]: Invalid user sharepoint from 80.253.31.232 port 33062
Oct 01 13:05:28 compute-0 sshd-session[60104]: Received disconnect from 80.253.31.232 port 33062:11: Bye Bye [preauth]
Oct 01 13:05:28 compute-0 sshd-session[60104]: Disconnected from invalid user sharepoint 80.253.31.232 port 33062 [preauth]
Oct 01 13:05:30 compute-0 sshd-session[60106]: Accepted publickey for zuul from 192.168.122.30 port 40490 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:05:30 compute-0 systemd-logind[818]: New session 14 of user zuul.
Oct 01 13:05:30 compute-0 systemd[1]: Started Session 14 of User zuul.
Oct 01 13:05:30 compute-0 sshd-session[60106]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:05:31 compute-0 python3.9[60259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:05:32 compute-0 sudo[60413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuwxtvdxvnqxxympnoagphgycvrrhunm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323931.8205183-33-186345936487421/AnsiballZ_file.py'
Oct 01 13:05:32 compute-0 sudo[60413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:32 compute-0 python3.9[60415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:32 compute-0 sudo[60413]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:33 compute-0 sudo[60588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etcxwmqirgucqwxyyxocuejzgkhbnpni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323932.6414375-41-188489877574633/AnsiballZ_stat.py'
Oct 01 13:05:33 compute-0 sudo[60588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:33 compute-0 python3.9[60590]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:33 compute-0 sudo[60588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:33 compute-0 sudo[60668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nritvzgounaiaawihsvivsiegyyxmzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323932.6414375-41-188489877574633/AnsiballZ_file.py'
Oct 01 13:05:33 compute-0 sudo[60668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:33 compute-0 python3.9[60670]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.sjfer7ai recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:33 compute-0 sudo[60668]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:34 compute-0 sudo[60820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egmwaiusubacnwuzmqzahtkrgctqyhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323934.209137-61-176536605228878/AnsiballZ_stat.py'
Oct 01 13:05:34 compute-0 sudo[60820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:34 compute-0 python3.9[60822]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:34 compute-0 sudo[60820]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:34 compute-0 sshd-session[60614]: Invalid user seekcy from 27.254.137.144 port 52560
Oct 01 13:05:35 compute-0 sudo[60898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opuvuovovcaqscxksijcgxmelmyraeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323934.209137-61-176536605228878/AnsiballZ_file.py'
Oct 01 13:05:35 compute-0 sudo[60898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:35 compute-0 sshd-session[60614]: Received disconnect from 27.254.137.144 port 52560:11: Bye Bye [preauth]
Oct 01 13:05:35 compute-0 sshd-session[60614]: Disconnected from invalid user seekcy 27.254.137.144 port 52560 [preauth]
Oct 01 13:05:35 compute-0 python3.9[60900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.zjkyyie6 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:35 compute-0 sudo[60898]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:35 compute-0 sudo[61050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmqtiouedprgrjigqmkzkbqmklhbwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323935.4075496-74-229139768448214/AnsiballZ_file.py'
Oct 01 13:05:35 compute-0 sudo[61050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:35 compute-0 python3.9[61052]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:05:35 compute-0 sudo[61050]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:36 compute-0 sudo[61202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpkuenujkwtqhcsoulcqxsenvpuxpcnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323936.0456624-82-57880354662437/AnsiballZ_stat.py'
Oct 01 13:05:36 compute-0 sudo[61202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:36 compute-0 python3.9[61204]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:36 compute-0 sudo[61202]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:36 compute-0 sudo[61280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgcrxunsrziimhakcqqckofuqhcqtlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323936.0456624-82-57880354662437/AnsiballZ_file.py'
Oct 01 13:05:36 compute-0 sudo[61280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:37 compute-0 python3.9[61282]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:05:37 compute-0 sudo[61280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:37 compute-0 sudo[61432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxantzyfltlqjxaqklwrwgqacnpywmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323937.197639-82-266587420993446/AnsiballZ_stat.py'
Oct 01 13:05:37 compute-0 sudo[61432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:37 compute-0 python3.9[61434]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:37 compute-0 sudo[61432]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:38 compute-0 sudo[61510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdlgrrkprcsffjqrzpfxljctdwdtlwfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323937.197639-82-266587420993446/AnsiballZ_file.py'
Oct 01 13:05:38 compute-0 sudo[61510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:38 compute-0 python3.9[61512]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:05:38 compute-0 sudo[61510]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:38 compute-0 sudo[61662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xglcekhdmzsqunphivrnyfiqjvozsrhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323938.4994843-105-253865824293146/AnsiballZ_file.py'
Oct 01 13:05:38 compute-0 sudo[61662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:39 compute-0 python3.9[61664]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:39 compute-0 sudo[61662]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:39 compute-0 sudo[61814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvfmnmuqfhiddcffcfcnpbfjtftyfrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323939.2613811-113-78420680059026/AnsiballZ_stat.py'
Oct 01 13:05:39 compute-0 sudo[61814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:39 compute-0 python3.9[61816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:39 compute-0 sudo[61814]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:40 compute-0 sudo[61892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfhutzdzclopvmuaiejezftoigpalam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323939.2613811-113-78420680059026/AnsiballZ_file.py'
Oct 01 13:05:40 compute-0 sudo[61892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:40 compute-0 python3.9[61894]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:40 compute-0 sudo[61892]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:40 compute-0 sudo[62044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdnpebvijzbikixtclxwrndeciybawma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323940.4875216-125-204044226191513/AnsiballZ_stat.py'
Oct 01 13:05:40 compute-0 sudo[62044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:40 compute-0 python3.9[62046]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:41 compute-0 sudo[62044]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:41 compute-0 sudo[62122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepodcejmrysrmugqjitwihmuqhyhbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323940.4875216-125-204044226191513/AnsiballZ_file.py'
Oct 01 13:05:41 compute-0 sudo[62122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:41 compute-0 python3.9[62124]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:41 compute-0 sudo[62122]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:42 compute-0 sudo[62274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlinobciqgblgzpkruczvkzkfmeqvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323941.706757-137-13692836661442/AnsiballZ_systemd.py'
Oct 01 13:05:42 compute-0 sudo[62274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:42 compute-0 python3.9[62276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:05:42 compute-0 systemd[1]: Reloading.
Oct 01 13:05:42 compute-0 systemd-sysv-generator[62306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:05:42 compute-0 systemd-rc-local-generator[62303]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:05:43 compute-0 sudo[62274]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:44 compute-0 sudo[62462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgzgpjwpvmtjruknfuyxfaheihooqjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323944.1260505-145-61800479427665/AnsiballZ_stat.py'
Oct 01 13:05:44 compute-0 sudo[62462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:44 compute-0 python3.9[62464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:44 compute-0 sudo[62462]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:44 compute-0 sudo[62540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mghrpgmuuhlneckjqsmttlmfctvkampk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323944.1260505-145-61800479427665/AnsiballZ_file.py'
Oct 01 13:05:44 compute-0 sudo[62540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:45 compute-0 python3.9[62542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:45 compute-0 sudo[62540]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:45 compute-0 sudo[62692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhtwufupdblzannvmgnzqcszaefsdeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323945.331729-157-127650496936860/AnsiballZ_stat.py'
Oct 01 13:05:45 compute-0 sudo[62692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:45 compute-0 python3.9[62694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:45 compute-0 sudo[62692]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:46 compute-0 sudo[62770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csvmguuissaxmjqcgxotbtkzxgdjdppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323945.331729-157-127650496936860/AnsiballZ_file.py'
Oct 01 13:05:46 compute-0 sudo[62770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:46 compute-0 python3.9[62772]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:46 compute-0 sudo[62770]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:46 compute-0 sudo[62922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tebtednpujlgobzhdayroungrcglfmsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323946.398595-169-155375977293853/AnsiballZ_systemd.py'
Oct 01 13:05:46 compute-0 sudo[62922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:46 compute-0 python3.9[62924]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:05:46 compute-0 systemd[1]: Reloading.
Oct 01 13:05:47 compute-0 systemd-rc-local-generator[62947]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:05:47 compute-0 systemd-sysv-generator[62950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:05:47 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:05:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:05:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:05:47 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:05:47 compute-0 sudo[62922]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:48 compute-0 python3.9[63114]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:05:48 compute-0 network[63131]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:05:48 compute-0 network[63132]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:05:48 compute-0 network[63133]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:05:53 compute-0 sudo[63394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptmkbnkcuqpicwgbjkdjmzjrzkjiqag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323953.0134513-195-33955011206332/AnsiballZ_stat.py'
Oct 01 13:05:53 compute-0 sudo[63394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:53 compute-0 python3.9[63396]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:53 compute-0 sudo[63394]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:53 compute-0 sudo[63472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwowhlxqbmmyfrzeqwomlvftthjionil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323953.0134513-195-33955011206332/AnsiballZ_file.py'
Oct 01 13:05:54 compute-0 sudo[63472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:54 compute-0 python3.9[63474]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:54 compute-0 sudo[63472]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:54 compute-0 sudo[63624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgbeolbmanvthuaotsyhcklpbmaabpln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323954.4326825-208-118706824056273/AnsiballZ_file.py'
Oct 01 13:05:54 compute-0 sudo[63624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:54 compute-0 python3.9[63626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:55 compute-0 sudo[63624]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:55 compute-0 sudo[63776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgpapflarqugfuqoldozcmuerduzxewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323955.219621-216-145922568579000/AnsiballZ_stat.py'
Oct 01 13:05:55 compute-0 sudo[63776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:55 compute-0 python3.9[63778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:55 compute-0 sudo[63776]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:56 compute-0 sudo[63899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eydklpzkcfhyzfxhrwfabpnlsmpmqawh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323955.219621-216-145922568579000/AnsiballZ_copy.py'
Oct 01 13:05:56 compute-0 sudo[63899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:56 compute-0 python3.9[63901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323955.219621-216-145922568579000/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:56 compute-0 sudo[63899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:57 compute-0 sudo[64051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwptfdmtjxpuvkzamhtjsgxhsdvmpaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323957.041326-234-75533306215295/AnsiballZ_timezone.py'
Oct 01 13:05:57 compute-0 sudo[64051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:57 compute-0 python3.9[64053]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 13:05:57 compute-0 systemd[1]: Starting Time & Date Service...
Oct 01 13:05:57 compute-0 systemd[1]: Started Time & Date Service.
Oct 01 13:05:57 compute-0 sudo[64051]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:58 compute-0 sudo[64207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hniudyahpvgofqmdzbbucxjtkuufuvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323958.123161-243-243513317922472/AnsiballZ_file.py'
Oct 01 13:05:58 compute-0 sudo[64207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:58 compute-0 python3.9[64209]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:58 compute-0 sudo[64207]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:59 compute-0 sudo[64359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrsilizdcqvxlgjktjhiwtlscarwrwol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323958.7204382-251-26978613087359/AnsiballZ_stat.py'
Oct 01 13:05:59 compute-0 sudo[64359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:59 compute-0 python3.9[64361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:05:59 compute-0 sudo[64359]: pam_unix(sudo:session): session closed for user root
Oct 01 13:05:59 compute-0 sudo[64484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqhbjwxcnttmoprfdkhyeljoaeemurux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323958.7204382-251-26978613087359/AnsiballZ_copy.py'
Oct 01 13:05:59 compute-0 sudo[64484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:05:59 compute-0 sshd-session[64385]: Invalid user seekcy from 200.7.101.139 port 60176
Oct 01 13:05:59 compute-0 python3.9[64486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323958.7204382-251-26978613087359/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:05:59 compute-0 sshd-session[64385]: Received disconnect from 200.7.101.139 port 60176:11: Bye Bye [preauth]
Oct 01 13:05:59 compute-0 sshd-session[64385]: Disconnected from invalid user seekcy 200.7.101.139 port 60176 [preauth]
Oct 01 13:06:00 compute-0 sudo[64484]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:00 compute-0 sudo[64636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cewollkdckkongnjpnegjsdbxinhjowd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323960.2547834-266-251211334890083/AnsiballZ_stat.py'
Oct 01 13:06:00 compute-0 sudo[64636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:00 compute-0 python3.9[64638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:00 compute-0 sudo[64636]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:01 compute-0 sudo[64759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdccgqdafxwzgjakyjevafscqlwsbscn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323960.2547834-266-251211334890083/AnsiballZ_copy.py'
Oct 01 13:06:01 compute-0 sudo[64759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:01 compute-0 python3.9[64761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759323960.2547834-266-251211334890083/.source.yaml _original_basename=.av8rd0yi follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:01 compute-0 sudo[64759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:01 compute-0 sudo[64911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfxfuvukytigyjgddihvaujqbycfoki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323961.6047213-281-161598109529113/AnsiballZ_stat.py'
Oct 01 13:06:01 compute-0 sudo[64911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:02 compute-0 python3.9[64913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:02 compute-0 sudo[64911]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:02 compute-0 sudo[65034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bszhqpkhqjwrfwnihutglkseqoubnazt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323961.6047213-281-161598109529113/AnsiballZ_copy.py'
Oct 01 13:06:02 compute-0 sudo[65034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:02 compute-0 python3.9[65036]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323961.6047213-281-161598109529113/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:02 compute-0 sudo[65034]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:03 compute-0 sudo[65186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhizwjkgcbqnsvwyacshznnhsfvqmhtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323962.997391-296-66485107991311/AnsiballZ_command.py'
Oct 01 13:06:03 compute-0 sudo[65186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:03 compute-0 python3.9[65188]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:03 compute-0 sudo[65186]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:04 compute-0 sudo[65339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viuapnpjjklwbixfhaaeifdhamugewgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323964.0261974-304-214315220581639/AnsiballZ_command.py'
Oct 01 13:06:04 compute-0 sudo[65339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:04 compute-0 python3.9[65341]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:04 compute-0 sudo[65339]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:05 compute-0 sudo[65492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymuaqnnpskchhzdxxszhjibyvlfsfvp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759323964.7575665-312-205323604031931/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 13:06:05 compute-0 sudo[65492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:05 compute-0 python3[65494]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 13:06:05 compute-0 sudo[65492]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:06 compute-0 sudo[65644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trbibpiyugyvmeobqbghunfuivrlcscv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323965.712362-320-77308118860645/AnsiballZ_stat.py'
Oct 01 13:06:06 compute-0 sudo[65644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:06 compute-0 python3.9[65646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:06 compute-0 sudo[65644]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:06 compute-0 sudo[65767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtbjzrtiafquyhffcrsaputxqnekaakr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323965.712362-320-77308118860645/AnsiballZ_copy.py'
Oct 01 13:06:06 compute-0 sudo[65767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:06 compute-0 python3.9[65769]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323965.712362-320-77308118860645/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:06 compute-0 sudo[65767]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:07 compute-0 sudo[65919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwiwuaivebhqzwkppfijrlmcrxcpgooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323966.9853039-335-66973344245041/AnsiballZ_stat.py'
Oct 01 13:06:07 compute-0 sudo[65919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:07 compute-0 python3.9[65921]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:07 compute-0 sudo[65919]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:07 compute-0 sudo[66042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epappqsvopqxxcedtvsfjxjaovwvzxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323966.9853039-335-66973344245041/AnsiballZ_copy.py'
Oct 01 13:06:07 compute-0 sudo[66042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:08 compute-0 python3.9[66044]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323966.9853039-335-66973344245041/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:08 compute-0 sudo[66042]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:08 compute-0 sudo[66194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gedmdyamhnztagyfwogimvsanedldcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323968.3360667-350-234888225112680/AnsiballZ_stat.py'
Oct 01 13:06:08 compute-0 sudo[66194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:08 compute-0 python3.9[66196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:08 compute-0 sudo[66194]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:09 compute-0 sudo[66317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjsoazijoibeoqyiyckebyzwycrlflk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323968.3360667-350-234888225112680/AnsiballZ_copy.py'
Oct 01 13:06:09 compute-0 sudo[66317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:09 compute-0 python3.9[66319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323968.3360667-350-234888225112680/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:09 compute-0 sudo[66317]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:10 compute-0 sudo[66469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxbjikxnrmtzzvwghffdtrwspnuzpter ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323969.7851064-365-28532846496043/AnsiballZ_stat.py'
Oct 01 13:06:10 compute-0 sudo[66469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:10 compute-0 python3.9[66471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:10 compute-0 sudo[66469]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:10 compute-0 sudo[66592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlzibvmofdszqpuqgtbjqgkufdkvnbyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323969.7851064-365-28532846496043/AnsiballZ_copy.py'
Oct 01 13:06:10 compute-0 sudo[66592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:10 compute-0 python3.9[66594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323969.7851064-365-28532846496043/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:10 compute-0 sudo[66592]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:11 compute-0 sudo[66744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqmrzzbkwoaznjeuqzaxxodkmjdakiiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323971.1457484-380-80316002628247/AnsiballZ_stat.py'
Oct 01 13:06:11 compute-0 sudo[66744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:11 compute-0 python3.9[66746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:06:11 compute-0 sudo[66744]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:12 compute-0 sudo[66867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabluincbceyiogxzgvhjfszeplxnfdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323971.1457484-380-80316002628247/AnsiballZ_copy.py'
Oct 01 13:06:12 compute-0 sudo[66867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:12 compute-0 python3.9[66869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759323971.1457484-380-80316002628247/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:12 compute-0 sudo[66867]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:12 compute-0 sudo[67021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdwkcqbiddddblqbndsgkvjtadvpuyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323972.5020473-395-184866022683318/AnsiballZ_file.py'
Oct 01 13:06:12 compute-0 sudo[67021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:12 compute-0 python3.9[67023]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:13 compute-0 sudo[67021]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:13 compute-0 sudo[67173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnlhslvhbqufyuecrwnyyukqudrgmwak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323973.21221-403-223331968848863/AnsiballZ_command.py'
Oct 01 13:06:13 compute-0 sudo[67173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:13 compute-0 python3.9[67175]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:13 compute-0 sudo[67173]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:14 compute-0 sudo[67332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvqdqsuhafoleckilyeymnrtcqdpgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323974.0187511-411-65921600263499/AnsiballZ_blockinfile.py'
Oct 01 13:06:14 compute-0 sudo[67332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:14 compute-0 python3.9[67334]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:14 compute-0 sudo[67332]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:15 compute-0 sudo[67485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmjwzfddidvmmyikwlgidhzmtgiwwnym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323974.9958103-420-68831718174719/AnsiballZ_file.py'
Oct 01 13:06:15 compute-0 sudo[67485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:15 compute-0 python3.9[67487]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:15 compute-0 sudo[67485]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:16 compute-0 sudo[67637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oorthhpericbattzuxdjvgknrunwncxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323975.7381155-420-16255444514120/AnsiballZ_file.py'
Oct 01 13:06:16 compute-0 sudo[67637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:16 compute-0 python3.9[67639]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:16 compute-0 sudo[67637]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:16 compute-0 sshd-session[66966]: Connection closed by 14.103.127.7 port 45512 [preauth]
Oct 01 13:06:16 compute-0 sudo[67789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwrngkwmisrnjwocssijgouwtyyzibd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323976.4881039-435-99771858916680/AnsiballZ_mount.py'
Oct 01 13:06:16 compute-0 sudo[67789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:17 compute-0 python3.9[67791]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 13:06:17 compute-0 sudo[67789]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:17 compute-0 sudo[67942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsckqfpkmxktqykgsgqonozdtsnotouz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323977.3577838-435-74465647443014/AnsiballZ_mount.py'
Oct 01 13:06:17 compute-0 sudo[67942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:17 compute-0 python3.9[67944]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 13:06:17 compute-0 sudo[67942]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:18 compute-0 sshd-session[60109]: Connection closed by 192.168.122.30 port 40490
Oct 01 13:06:18 compute-0 sshd-session[60106]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:06:18 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Oct 01 13:06:18 compute-0 systemd[1]: session-14.scope: Consumed 34.566s CPU time.
Oct 01 13:06:18 compute-0 systemd-logind[818]: Session 14 logged out. Waiting for processes to exit.
Oct 01 13:06:18 compute-0 systemd-logind[818]: Removed session 14.
Oct 01 13:06:23 compute-0 sshd-session[67970]: Accepted publickey for zuul from 192.168.122.30 port 37898 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:06:23 compute-0 systemd-logind[818]: New session 15 of user zuul.
Oct 01 13:06:23 compute-0 systemd[1]: Started Session 15 of User zuul.
Oct 01 13:06:23 compute-0 sshd-session[67970]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:06:24 compute-0 sudo[68125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogftlanpazmgvfkyvnkakkasyttqvlyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323983.668909-16-170315673783624/AnsiballZ_tempfile.py'
Oct 01 13:06:24 compute-0 sudo[68125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:24 compute-0 python3.9[68127]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 01 13:06:24 compute-0 sudo[68125]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:24 compute-0 sshd-session[68108]: Received disconnect from 156.236.31.46 port 43484:11: Bye Bye [preauth]
Oct 01 13:06:24 compute-0 sshd-session[68108]: Disconnected from authenticating user root 156.236.31.46 port 43484 [preauth]
Oct 01 13:06:24 compute-0 sudo[68277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xommshsogviilumwsmtuhwvuxrgeipej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323984.5933354-28-145195696646039/AnsiballZ_stat.py'
Oct 01 13:06:24 compute-0 sudo[68277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:25 compute-0 python3.9[68279]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:06:25 compute-0 sudo[68277]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:25 compute-0 sudo[68431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flhuogeaqpltvzdggqfxwqeehegenrmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323985.389664-38-51759785931020/AnsiballZ_setup.py'
Oct 01 13:06:25 compute-0 sudo[68431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:26 compute-0 python3.9[68433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:06:26 compute-0 sudo[68431]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:26 compute-0 sshd-session[68356]: Invalid user console from 80.253.31.232 port 60668
Oct 01 13:06:26 compute-0 sshd-session[68356]: Received disconnect from 80.253.31.232 port 60668:11: Bye Bye [preauth]
Oct 01 13:06:26 compute-0 sshd-session[68356]: Disconnected from invalid user console 80.253.31.232 port 60668 [preauth]
Oct 01 13:06:27 compute-0 sudo[68583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kertmlauflvplicnvnmqvnopkvvgadbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323986.552436-47-84904340763880/AnsiballZ_blockinfile.py'
Oct 01 13:06:27 compute-0 sudo[68583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:27 compute-0 python3.9[68585]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQuc3bhfyzL595OFOLV247IpwwrNv1jbuEyuIMlhGVL9o/JSyWTFuOVfeOlp2bgaV1HmT029a0g6F2wKmJyCLyTmUlSHjvFu+5OYahUrcWRA5wdTNonHdPtV7OxmGUyid1pIpbNVNRW3jpvnxoiRnI9We0KEWETWj0KsbyuQEnHthqnNEbvu9ZDWHKO3WwnNiEt4TvlIrnPpVac+Q9mG4Iqcsl1qDYx9ZKPuVLtYXvEtxENwTCfYUN7Nt9v/5SUlGTGxFlLR/tBKFw98HNvii7zAkpst6QHrOpcFmWYO6LMkxVjz0aIZvNUsbfKtfnSgjUBuC6Oy/QuzhKisWbFqPENpGofP9VCenS2zfCHewrnjhYCM6/NX7PzTVH0vkxCO2C5+xXm6HIvDZPnYfSL50+z5xfZXpuB7I8mKze82lkWdpFMkvmglXmjoEQgmrbl5kPRhq0yteRkbyyR6B/0X02dml1bPXU3azBrbTQNImgJeKRX8yZGL3Bbsfl5VMT+r8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgRSLYQNGHBrZk4XBkcn+kfWXhVXnPjRWsejgHIwyOG
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQp4ff+5X+OCwYApPStN8XgACWS/2O/jZ6Xj4flPyrz/owAZoGD9kAYm/48KAYQYbXLvyoq8TZyZOgBYKe6Lcs=
                                             create=True mode=0644 path=/tmp/ansible.a2z4nrel state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:27 compute-0 sudo[68583]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:27 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 13:06:27 compute-0 sudo[68738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcmtjpviefwawjgznqteycdalaoaetd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323987.4665248-55-37636015159788/AnsiballZ_command.py'
Oct 01 13:06:27 compute-0 sudo[68738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:28 compute-0 python3.9[68740]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.a2z4nrel' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:28 compute-0 sudo[68738]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:28 compute-0 sudo[68893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdplvyptvmyhvwmjdydllffhtoyexxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323988.453644-63-237560389501465/AnsiballZ_file.py'
Oct 01 13:06:28 compute-0 sudo[68893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:29 compute-0 python3.9[68895]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.a2z4nrel state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:29 compute-0 sudo[68893]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:29 compute-0 sshd-session[67973]: Connection closed by 192.168.122.30 port 37898
Oct 01 13:06:29 compute-0 sshd-session[67970]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:06:29 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 01 13:06:29 compute-0 systemd[1]: session-15.scope: Consumed 3.617s CPU time.
Oct 01 13:06:29 compute-0 systemd-logind[818]: Session 15 logged out. Waiting for processes to exit.
Oct 01 13:06:29 compute-0 systemd-logind[818]: Removed session 15.
Oct 01 13:06:34 compute-0 sshd-session[68920]: Accepted publickey for zuul from 192.168.122.30 port 55768 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:06:34 compute-0 systemd-logind[818]: New session 16 of user zuul.
Oct 01 13:06:34 compute-0 systemd[1]: Started Session 16 of User zuul.
Oct 01 13:06:34 compute-0 sshd-session[68920]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:06:35 compute-0 python3.9[69073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:06:36 compute-0 sudo[69227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wturlnmhojqwjojtlofrlrnahuvqwefg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323996.1048613-32-63049671306568/AnsiballZ_systemd.py'
Oct 01 13:06:36 compute-0 sudo[69227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:37 compute-0 python3.9[69229]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 13:06:37 compute-0 sudo[69227]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:37 compute-0 sudo[69381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvnwgwchkdwacxmspdnilqifmorivogx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323997.3047285-40-186722813478634/AnsiballZ_systemd.py'
Oct 01 13:06:37 compute-0 sudo[69381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:37 compute-0 python3.9[69383]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:06:37 compute-0 sudo[69381]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:38 compute-0 sudo[69534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-godvsorlkcrdievsyadtdnuhryfwpnzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323998.1779087-49-86404584894288/AnsiballZ_command.py'
Oct 01 13:06:38 compute-0 sudo[69534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:38 compute-0 python3.9[69536]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:38 compute-0 sudo[69534]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:39 compute-0 sudo[69687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinwbqffghlyrsgqygnylhhtoacnahlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323999.0908148-57-42070402267374/AnsiballZ_stat.py'
Oct 01 13:06:39 compute-0 sudo[69687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:39 compute-0 python3.9[69689]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:06:39 compute-0 sudo[69687]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:40 compute-0 sudo[69841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njqkrtuvlsbyzynklttkrjouohrfvcne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759323999.9386895-65-78698855278392/AnsiballZ_command.py'
Oct 01 13:06:40 compute-0 sudo[69841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:40 compute-0 python3.9[69843]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:40 compute-0 sudo[69841]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:40 compute-0 sudo[69996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmpmrbkutjunttsxjrvthgwucxokqaic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324000.515555-73-268734919859995/AnsiballZ_file.py'
Oct 01 13:06:40 compute-0 sudo[69996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:41 compute-0 python3.9[69998]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:06:41 compute-0 sudo[69996]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:41 compute-0 sshd-session[68923]: Connection closed by 192.168.122.30 port 55768
Oct 01 13:06:41 compute-0 sshd-session[68920]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:06:41 compute-0 systemd-logind[818]: Session 16 logged out. Waiting for processes to exit.
Oct 01 13:06:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 01 13:06:41 compute-0 systemd[1]: session-16.scope: Consumed 4.620s CPU time.
Oct 01 13:06:41 compute-0 systemd-logind[818]: Removed session 16.
Oct 01 13:06:46 compute-0 sshd-session[70023]: Received disconnect from 27.254.137.144 port 48148:11: Bye Bye [preauth]
Oct 01 13:06:46 compute-0 sshd-session[70023]: Disconnected from authenticating user root 27.254.137.144 port 48148 [preauth]
Oct 01 13:06:46 compute-0 sshd-session[70025]: Accepted publickey for zuul from 192.168.122.30 port 58044 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:06:46 compute-0 systemd-logind[818]: New session 17 of user zuul.
Oct 01 13:06:46 compute-0 systemd[1]: Started Session 17 of User zuul.
Oct 01 13:06:46 compute-0 sshd-session[70025]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:06:48 compute-0 python3.9[70178]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:06:48 compute-0 sudo[70332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvnyjusrfmoexbmjaxudhkcuwfcvgplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324008.4606233-34-62011775034143/AnsiballZ_setup.py'
Oct 01 13:06:48 compute-0 sudo[70332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:49 compute-0 python3.9[70334]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:06:49 compute-0 sudo[70332]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:49 compute-0 sudo[70416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhlltkkcghgqsxgxirbdcbvdtvutlga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324008.4606233-34-62011775034143/AnsiballZ_dnf.py'
Oct 01 13:06:49 compute-0 sudo[70416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:06:50 compute-0 python3.9[70418]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 13:06:51 compute-0 sudo[70416]: pam_unix(sudo:session): session closed for user root
Oct 01 13:06:52 compute-0 python3.9[70569]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:06:53 compute-0 python3.9[70720]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:06:54 compute-0 python3.9[70870]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:06:54 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:06:55 compute-0 python3.9[71021]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:06:55 compute-0 sshd-session[70028]: Connection closed by 192.168.122.30 port 58044
Oct 01 13:06:55 compute-0 sshd-session[70025]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:06:55 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 01 13:06:55 compute-0 systemd[1]: session-17.scope: Consumed 6.135s CPU time.
Oct 01 13:06:55 compute-0 systemd-logind[818]: Session 17 logged out. Waiting for processes to exit.
Oct 01 13:06:55 compute-0 systemd-logind[818]: Removed session 17.
Oct 01 13:07:03 compute-0 sshd-session[71047]: Accepted publickey for zuul from 38.102.83.150 port 43256 ssh2: RSA SHA256:tSx7W6G1Z7aOy2GAa2AuzDc8oXNjA1+IQNz1loW/bEk
Oct 01 13:07:03 compute-0 systemd-logind[818]: New session 18 of user zuul.
Oct 01 13:07:03 compute-0 systemd[1]: Started Session 18 of User zuul.
Oct 01 13:07:03 compute-0 sshd-session[71047]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:07:03 compute-0 sudo[71123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqybeyupsaglhguvlxhmbhmcqyzutexj ; /usr/bin/python3'
Oct 01 13:07:03 compute-0 sudo[71123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:04 compute-0 useradd[71127]: new group: name=ceph-admin, GID=42478
Oct 01 13:07:04 compute-0 useradd[71127]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 01 13:07:04 compute-0 sudo[71123]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:05 compute-0 sudo[71209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvuhbtikzyrmtcpvpudhenbwcbfotxc ; /usr/bin/python3'
Oct 01 13:07:05 compute-0 sudo[71209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:05 compute-0 sudo[71209]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:05 compute-0 sudo[71282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqyjwnklqvjakjopktjhqrxdhuxsaik ; /usr/bin/python3'
Oct 01 13:07:05 compute-0 sudo[71282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:05 compute-0 sudo[71282]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:06 compute-0 sudo[71332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghukhlirxjdqnkwmrmbxarmnqmebcoe ; /usr/bin/python3'
Oct 01 13:07:06 compute-0 sudo[71332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:06 compute-0 sudo[71332]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:06 compute-0 sudo[71358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmtlymhpmmdqkvujjtbsnhbbrwcfgnd ; /usr/bin/python3'
Oct 01 13:07:06 compute-0 sudo[71358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:06 compute-0 sudo[71358]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:07 compute-0 sudo[71384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uugjvduvvsclqabyxzkunxzrvjsawpok ; /usr/bin/python3'
Oct 01 13:07:07 compute-0 sudo[71384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:07 compute-0 sudo[71384]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:07 compute-0 sudo[71411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkykaqpqsawceennapvygasbqhohvbfj ; /usr/bin/python3'
Oct 01 13:07:07 compute-0 sudo[71411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:07 compute-0 sudo[71411]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:08 compute-0 sudo[71489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dakzlyhegvwjyrqfrawuaapocdsorcud ; /usr/bin/python3'
Oct 01 13:07:08 compute-0 sudo[71489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:08 compute-0 sudo[71489]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:08 compute-0 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekowigpxlfxszxzkdqcmdtwowqvqwiwn ; /usr/bin/python3'
Oct 01 13:07:08 compute-0 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:08 compute-0 sudo[71562]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:09 compute-0 sudo[71664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwiqarowwgkzyyrtwpxtfiezjxgkmbvr ; /usr/bin/python3'
Oct 01 13:07:09 compute-0 sudo[71664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:09 compute-0 sudo[71664]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:09 compute-0 sudo[71737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydggkjxkyffhskjpxcusgkvtewbzdwl ; /usr/bin/python3'
Oct 01 13:07:09 compute-0 sudo[71737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:09 compute-0 sudo[71737]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:10 compute-0 sudo[71787]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzxvcvqrieokylfpfzcjhiutzilzuswc ; /usr/bin/python3'
Oct 01 13:07:10 compute-0 sudo[71787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:10 compute-0 python3[71789]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:07:11 compute-0 sudo[71787]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:11 compute-0 sudo[71882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkblctixvicmtnzduamfmmdudwkgzhxa ; /usr/bin/python3'
Oct 01 13:07:11 compute-0 sudo[71882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:12 compute-0 python3[71884]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 13:07:13 compute-0 sudo[71882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:13 compute-0 sudo[71909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swfbizcbjeuazmqekitcfmcyhrgvxawb ; /usr/bin/python3'
Oct 01 13:07:13 compute-0 sudo[71909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:13 compute-0 python3[71911]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:13 compute-0 sudo[71909]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:13 compute-0 sudo[71935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusjaiokrdcxdjgcwpgehmbddjkecgpc ; /usr/bin/python3'
Oct 01 13:07:13 compute-0 sudo[71935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:13 compute-0 python3[71937]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:13 compute-0 kernel: loop: module loaded
Oct 01 13:07:13 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Oct 01 13:07:13 compute-0 sudo[71935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:14 compute-0 sudo[71970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdrngknfwoqdfamixcwhkmlgqmpxwzyl ; /usr/bin/python3'
Oct 01 13:07:14 compute-0 sudo[71970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:14 compute-0 python3[71972]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:14 compute-0 lvm[71975]: PV /dev/loop3 not used.
Oct 01 13:07:14 compute-0 lvm[71977]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 13:07:14 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 01 13:07:14 compute-0 lvm[71983]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 01 13:07:14 compute-0 lvm[71987]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 13:07:14 compute-0 lvm[71987]: VG ceph_vg0 finished
Oct 01 13:07:14 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 01 13:07:14 compute-0 sudo[71970]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:14 compute-0 sudo[72063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wffiyeplmfhefayisghvadaryojlybal ; /usr/bin/python3'
Oct 01 13:07:14 compute-0 sudo[72063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:15 compute-0 python3[72065]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:07:15 compute-0 sudo[72063]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:15 compute-0 sudo[72136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhhurfenhhrfxzhjraycybvrlkgjukh ; /usr/bin/python3'
Oct 01 13:07:15 compute-0 sudo[72136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:15 compute-0 python3[72138]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324034.7656996-33487-142021680194507/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:15 compute-0 sudo[72136]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:16 compute-0 sudo[72186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcmwhqqmjteffskbbjwhabjqiogxsknp ; /usr/bin/python3'
Oct 01 13:07:16 compute-0 sudo[72186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:16 compute-0 python3[72188]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:07:17 compute-0 systemd[1]: Reloading.
Oct 01 13:07:17 compute-0 systemd-rc-local-generator[72220]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:17 compute-0 systemd-sysv-generator[72223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:17 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 13:07:17 compute-0 bash[72229]: /dev/loop3: [64513]:4328141 (/var/lib/ceph-osd-0.img)
Oct 01 13:07:17 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 13:07:17 compute-0 sudo[72186]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:17 compute-0 lvm[72231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 13:07:17 compute-0 lvm[72231]: VG ceph_vg0 finished
Oct 01 13:07:17 compute-0 sudo[72257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwhxbbgjmrziybhlijxopmnltajflyag ; /usr/bin/python3'
Oct 01 13:07:17 compute-0 sudo[72257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:18 compute-0 python3[72259]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 13:07:18 compute-0 sshd-session[72232]: Received disconnect from 200.7.101.139 port 43402:11: Bye Bye [preauth]
Oct 01 13:07:18 compute-0 sshd-session[72232]: Disconnected from authenticating user root 200.7.101.139 port 43402 [preauth]
Oct 01 13:07:19 compute-0 sudo[72257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:19 compute-0 sudo[72284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxqprlhfhhwqvlturzvuebpzzqrnclq ; /usr/bin/python3'
Oct 01 13:07:19 compute-0 sudo[72284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:19 compute-0 python3[72286]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:19 compute-0 sudo[72284]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:20 compute-0 sudo[72310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqgiotxlhofiyszxetjplnudownwpnhe ; /usr/bin/python3'
Oct 01 13:07:20 compute-0 sudo[72310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:20 compute-0 python3[72312]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:20 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct 01 13:07:20 compute-0 sudo[72310]: pam_unix(sudo:session): session closed for user root
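The `dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G` invocation logged above writes nothing: it seeks 20 GiB past the start and truncates, producing a sparse backing file. The kernel line "capacity change from 0 to 41943040" is consistent with this: 41943040 512-byte sectors × 512 = 21474836480 bytes = 20 GiB. A minimal stand-alone sketch of the same sparse-file trick (demo filename is an assumption; the root-only `losetup` attach step is omitted):

```python
import os

# Equivalent of: dd if=/dev/zero of=FILE bs=1 count=0 seek=20G
# Truncating past EOF creates a sparse file: the apparent size is 20 GiB,
# but (almost) no blocks are allocated on disk.
path = "ceph-osd-demo.img"  # stand-in for /var/lib/ceph-osd-1.img
with open(path, "wb") as f:
    f.truncate(20 * 1024**3)  # set apparent size to 20 GiB

st = os.stat(path)
print(st.st_size)          # 21474836480 (matches 41943040 sectors * 512)
print(st.st_blocks * 512)  # actually-allocated bytes: far smaller, fs-dependent
os.remove(path)
```

This is why a 20 GiB OSD image can be created instantly on a small CI node: space is only consumed as Ceph later writes real data into it.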
Oct 01 13:07:20 compute-0 sudo[72341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddkvnudzlkhojhhjayxfjxwmduxfvgnx ; /usr/bin/python3'
Oct 01 13:07:20 compute-0 sudo[72341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:20 compute-0 python3[72343]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:20 compute-0 lvm[72346]: PV /dev/loop4 not used.
Oct 01 13:07:20 compute-0 lvm[72355]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 13:07:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct 01 13:07:20 compute-0 sudo[72341]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:20 compute-0 lvm[72357]:   1 logical volume(s) in volume group "ceph_vg1" now active
Oct 01 13:07:20 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct 01 13:07:21 compute-0 chronyd[54836]: Selected source 138.197.135.239 (pool.ntp.org)
Oct 01 13:07:21 compute-0 sudo[72433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgqhpwzoissgbnlgltzswzfewnoicikk ; /usr/bin/python3'
Oct 01 13:07:21 compute-0 sudo[72433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:21 compute-0 python3[72435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:07:21 compute-0 sudo[72433]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:21 compute-0 sudo[72506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getwshjmdynwvwkzlmvisetidvozkpuh ; /usr/bin/python3'
Oct 01 13:07:21 compute-0 sudo[72506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:21 compute-0 python3[72508]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324040.9066064-33514-2733799240575/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:21 compute-0 sudo[72506]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:21 compute-0 sudo[72556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrnrprpkjsvkzuytikbjngizdvqbhup ; /usr/bin/python3'
Oct 01 13:07:21 compute-0 sudo[72556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:22 compute-0 python3[72558]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:07:22 compute-0 systemd[1]: Reloading.
Oct 01 13:07:22 compute-0 systemd-rc-local-generator[72588]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:22 compute-0 systemd-sysv-generator[72591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:22 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 13:07:22 compute-0 bash[72598]: /dev/loop4: [64513]:4328191 (/var/lib/ceph-osd-1.img)
Oct 01 13:07:22 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 13:07:22 compute-0 sudo[72556]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:22 compute-0 lvm[72600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 13:07:22 compute-0 lvm[72600]: VG ceph_vg1 finished
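The content of `ceph-osd-losetup-1.service` is not in the log (only its checksum and the template name `ceph-osd-losetup.service.j2` are recorded). What the log does show is a oneshot-style start/finish pair ("Starting Ceph OSD losetup..." / "Finished Ceph OSD losetup.") around a `bash` process whose output, `/dev/loop4: [64513]:4328191 (/var/lib/ceph-osd-1.img)`, matches the status format of `losetup`. A plausible reconstruction follows; every directive here is an assumption, not the actual template:

```ini
# Hypothetical sketch of /etc/systemd/system/ceph-osd-losetup-1.service
# (the real ceph-osd-losetup.service.j2 template is not present in this log)
[Unit]
Description=Ceph OSD losetup
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=true
# Re-attach the backing file if the loop device is gone, then print its status
ExecStart=/bin/bash -c 'losetup /dev/loop4 || losetup /dev/loop4 /var/lib/ceph-osd-1.img'

[Install]
WantedBy=multi-user.target
```

The point of such a unit is to survive reboots: loop-device attachments are not persistent, so without it the OSD's backing device would vanish on the next boot.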
Oct 01 13:07:22 compute-0 sudo[72624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdclsnqhwmioynrsyrqbjtwsqwtoodjf ; /usr/bin/python3'
Oct 01 13:07:22 compute-0 sudo[72624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:22 compute-0 python3[72626]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 13:07:24 compute-0 sudo[72624]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:24 compute-0 sudo[72651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uflanjozsvrbpiylguhzbqndzpnfjbrv ; /usr/bin/python3'
Oct 01 13:07:24 compute-0 sudo[72651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:24 compute-0 python3[72653]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:24 compute-0 sudo[72651]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:24 compute-0 sudo[72679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yarexapczaeqgjrtcikntontvwkniqrf ; /usr/bin/python3'
Oct 01 13:07:24 compute-0 sudo[72679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:24 compute-0 python3[72681]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:24 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct 01 13:07:24 compute-0 sudo[72679]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:24 compute-0 sudo[72710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbdxnavcooiyzsrzfurrkholejspcyx ; /usr/bin/python3'
Oct 01 13:07:24 compute-0 sudo[72710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:25 compute-0 python3[72712]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:25 compute-0 lvm[72715]: PV /dev/loop5 not used.
Oct 01 13:07:25 compute-0 sshd-session[72667]: Invalid user ftpuser from 80.253.31.232 port 39790
Oct 01 13:07:25 compute-0 lvm[72717]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 13:07:25 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct 01 13:07:25 compute-0 sshd-session[72667]: Received disconnect from 80.253.31.232 port 39790:11: Bye Bye [preauth]
Oct 01 13:07:25 compute-0 sshd-session[72667]: Disconnected from invalid user ftpuser 80.253.31.232 port 39790 [preauth]
Oct 01 13:07:25 compute-0 lvm[72719]:   1 logical volume(s) in volume group "ceph_vg2" now active
Oct 01 13:07:25 compute-0 lvm[72727]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 13:07:25 compute-0 lvm[72727]: VG ceph_vg2 finished
Oct 01 13:07:25 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct 01 13:07:25 compute-0 sudo[72710]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:25 compute-0 sudo[72803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klzwwkuftnohdqogdyakrngytflwghwi ; /usr/bin/python3'
Oct 01 13:07:25 compute-0 sudo[72803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:25 compute-0 python3[72805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:07:26 compute-0 sudo[72803]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:26 compute-0 sudo[72876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeueezxdkpeqxviyljoicpkewxtcumpg ; /usr/bin/python3'
Oct 01 13:07:26 compute-0 sudo[72876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:26 compute-0 python3[72878]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324045.692942-33541-258765949831840/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:26 compute-0 sudo[72876]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:26 compute-0 sudo[72926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgddvksfisgrjfcokhofqchazmfyzks ; /usr/bin/python3'
Oct 01 13:07:26 compute-0 sudo[72926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:26 compute-0 python3[72928]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:07:26 compute-0 systemd[1]: Reloading.
Oct 01 13:07:27 compute-0 systemd-rc-local-generator[72951]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:27 compute-0 systemd-sysv-generator[72959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:27 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct 01 13:07:27 compute-0 bash[72969]: /dev/loop5: [64513]:4328604 (/var/lib/ceph-osd-2.img)
Oct 01 13:07:27 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct 01 13:07:27 compute-0 sudo[72926]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:27 compute-0 lvm[72971]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 13:07:27 compute-0 lvm[72971]: VG ceph_vg2 finished
Oct 01 13:07:29 compute-0 python3[72995]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:07:31 compute-0 sudo[73086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxafglyyqlqzolxansuxovwpovfegfz ; /usr/bin/python3'
Oct 01 13:07:31 compute-0 sudo[73086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:31 compute-0 python3[73088]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 01 13:07:32 compute-0 groupadd[73094]: group added to /etc/group: name=cephadm, GID=992
Oct 01 13:07:32 compute-0 groupadd[73094]: group added to /etc/gshadow: name=cephadm
Oct 01 13:07:32 compute-0 groupadd[73094]: new group: name=cephadm, GID=992
Oct 01 13:07:32 compute-0 useradd[73101]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Oct 01 13:07:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:07:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:07:33 compute-0 sudo[73086]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:33 compute-0 sudo[73200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccwnwmmxqiyhunhodjumwjqwfxkxqxtw ; /usr/bin/python3'
Oct 01 13:07:33 compute-0 sudo[73200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:33 compute-0 python3[73202]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:33 compute-0 sudo[73200]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:07:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:07:33 compute-0 systemd[1]: run-rbc45b61195084f5fae5d5e7be7c8a17a.service: Deactivated successfully.
Oct 01 13:07:33 compute-0 sudo[73231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdlsvxdijvtengswguxvaergaikrfbx ; /usr/bin/python3'
Oct 01 13:07:33 compute-0 sudo[73231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:34 compute-0 python3[73233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:07:34 compute-0 sshd-session[73203]: Invalid user TestUser from 156.236.31.46 port 43570
Oct 01 13:07:34 compute-0 sshd-session[73203]: Received disconnect from 156.236.31.46 port 43570:11: Bye Bye [preauth]
Oct 01 13:07:34 compute-0 sshd-session[73203]: Disconnected from invalid user TestUser 156.236.31.46 port 43570 [preauth]
Oct 01 13:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:34 compute-0 sudo[73231]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:34 compute-0 sudo[73294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznuvpkrbnwjszqoucsfyydzppnwacdn ; /usr/bin/python3'
Oct 01 13:07:34 compute-0 sudo[73294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:35 compute-0 python3[73296]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:35 compute-0 sudo[73294]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:35 compute-0 sudo[73320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvikyannxztalsgmigaektazxtnxjha ; /usr/bin/python3'
Oct 01 13:07:35 compute-0 sudo[73320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:35 compute-0 python3[73322]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:35 compute-0 sudo[73320]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:35 compute-0 sudo[73398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dojzxqqqirdgjfrapcqqtkmirwsmjvqr ; /usr/bin/python3'
Oct 01 13:07:35 compute-0 sudo[73398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:36 compute-0 python3[73400]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:07:36 compute-0 sudo[73398]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:36 compute-0 sudo[73471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-godjpslhucxmcnzmjfqueudapzuegvof ; /usr/bin/python3'
Oct 01 13:07:36 compute-0 sudo[73471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:36 compute-0 python3[73473]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324055.8135173-33688-268762967771436/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:36 compute-0 sudo[73471]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:37 compute-0 sudo[73573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgjhrocytlxodyihziseysmosqzgzaf ; /usr/bin/python3'
Oct 01 13:07:37 compute-0 sudo[73573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:37 compute-0 python3[73575]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:07:37 compute-0 sudo[73573]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:37 compute-0 sudo[73646]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjarspdrzchhipvpasqtovtycgphvwhy ; /usr/bin/python3'
Oct 01 13:07:37 compute-0 sudo[73646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:37 compute-0 python3[73648]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324056.9729111-33706-15176687737777/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:07:37 compute-0 sudo[73646]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:38 compute-0 sudo[73696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhbnoovritisvsmijclgngbcjdjoictf ; /usr/bin/python3'
Oct 01 13:07:38 compute-0 sudo[73696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:38 compute-0 python3[73698]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:38 compute-0 sudo[73696]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:38 compute-0 sudo[73724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnbdbcmvjtiesrdxxpwedcxmulwtvtpp ; /usr/bin/python3'
Oct 01 13:07:38 compute-0 sudo[73724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:38 compute-0 python3[73726]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:38 compute-0 sudo[73724]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:38 compute-0 sudo[73752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbksearrvlyryvaylrpqawtdzobmcysi ; /usr/bin/python3'
Oct 01 13:07:38 compute-0 sudo[73752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:38 compute-0 python3[73754]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:07:38 compute-0 sudo[73752]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:39 compute-0 sudo[73780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrbnlrhghwwnixtldipnpfhqswqemhp ; /usr/bin/python3'
Oct 01 13:07:39 compute-0 sudo[73780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:07:39 compute-0 python3[73782]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
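The bootstrap invocation at 13:07:39 is logged as one flattened `_raw_params` string, and the `\--single-host-defaults \--skip-monitoring-stack` fragments are shell line-continuation backslashes from the playbook surviving into the logged command. Reflowed for readability (flags exactly as logged, with the backslashes restored to genuine line continuations):

```
/usr/sbin/cephadm bootstrap \
  --skip-firewalld \
  --skip-prepare-host \
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
  --ssh-user ceph-admin \
  --allow-fqdn-hostname \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf \
  --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f \
  --config /home/ceph-admin/assimilate_ceph.conf \
  --single-host-defaults \
  --skip-monitoring-stack \
  --skip-dashboard \
  --mon-ip 192.168.122.100
```

The SSH key pair passed here explains the earlier stat checks on `/home/ceph-admin/.ssh/id_rsa{,.pub}`, and the `Accepted publickey for ceph-admin from 192.168.122.100` line immediately after is cephadm verifying it can SSH back into the host as `--ssh-user`.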
Oct 01 13:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:39 compute-0 sshd-session[73798]: Accepted publickey for ceph-admin from 192.168.122.100 port 34550 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:07:39 compute-0 systemd-logind[818]: New session 19 of user ceph-admin.
Oct 01 13:07:39 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 01 13:07:39 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 01 13:07:39 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 01 13:07:39 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 01 13:07:39 compute-0 systemd[73802]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:07:39 compute-0 systemd[73802]: Queued start job for default target Main User Target.
Oct 01 13:07:39 compute-0 systemd[73802]: Created slice User Application Slice.
Oct 01 13:07:39 compute-0 systemd[73802]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 13:07:39 compute-0 systemd[73802]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 13:07:39 compute-0 systemd[73802]: Reached target Paths.
Oct 01 13:07:39 compute-0 systemd[73802]: Reached target Timers.
Oct 01 13:07:39 compute-0 systemd[73802]: Starting D-Bus User Message Bus Socket...
Oct 01 13:07:39 compute-0 systemd[73802]: Starting Create User's Volatile Files and Directories...
Oct 01 13:07:39 compute-0 systemd[73802]: Finished Create User's Volatile Files and Directories.
Oct 01 13:07:39 compute-0 systemd[73802]: Listening on D-Bus User Message Bus Socket.
Oct 01 13:07:39 compute-0 systemd[73802]: Reached target Sockets.
Oct 01 13:07:39 compute-0 systemd[73802]: Reached target Basic System.
Oct 01 13:07:39 compute-0 systemd[73802]: Reached target Main User Target.
Oct 01 13:07:39 compute-0 systemd[73802]: Startup finished in 170ms.
Oct 01 13:07:39 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 01 13:07:39 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Oct 01 13:07:39 compute-0 sshd-session[73798]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:07:39 compute-0 sudo[73818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Oct 01 13:07:39 compute-0 sudo[73818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:07:39 compute-0 sudo[73818]: pam_unix(sudo:session): session closed for user root
Oct 01 13:07:39 compute-0 sshd-session[73817]: Received disconnect from 192.168.122.100 port 34550:11: disconnected by user
Oct 01 13:07:39 compute-0 sshd-session[73817]: Disconnected from user ceph-admin 192.168.122.100 port 34550
Oct 01 13:07:39 compute-0 sshd-session[73798]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 01 13:07:39 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 01 13:07:39 compute-0 systemd-logind[818]: Session 19 logged out. Waiting for processes to exit.
Oct 01 13:07:39 compute-0 systemd-logind[818]: Removed session 19.
Oct 01 13:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1464513559-lower\x2dmapped.mount: Deactivated successfully.
Oct 01 13:07:50 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct 01 13:07:50 compute-0 systemd[73802]: Activating special unit Exit the Session...
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped target Main User Target.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped target Basic System.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped target Paths.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped target Sockets.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped target Timers.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 13:07:50 compute-0 systemd[73802]: Closed D-Bus User Message Bus Socket.
Oct 01 13:07:50 compute-0 systemd[73802]: Stopped Create User's Volatile Files and Directories.
Oct 01 13:07:50 compute-0 systemd[73802]: Removed slice User Application Slice.
Oct 01 13:07:50 compute-0 systemd[73802]: Reached target Shutdown.
Oct 01 13:07:50 compute-0 systemd[73802]: Finished Exit the Session.
Oct 01 13:07:50 compute-0 systemd[73802]: Reached target Exit the Session.
Oct 01 13:07:50 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct 01 13:07:50 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct 01 13:07:50 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 01 13:07:50 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 01 13:07:50 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 01 13:07:50 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 01 13:07:50 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct 01 13:07:53 compute-0 podman[73855]: 2025-10-01 13:07:53.43011535 +0000 UTC m=+13.380545553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.531841557 +0000 UTC m=+0.060812520 container create bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:07:53 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 01 13:07:53 compute-0 systemd[1]: Started libpod-conmon-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope.
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.507545807 +0000 UTC m=+0.036516750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.643297214 +0000 UTC m=+0.172268227 container init bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.650487162 +0000 UTC m=+0.179458095 container start bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.653997884 +0000 UTC m=+0.182968897 container attach bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:07:53 compute-0 elegant_margulis[73938]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 01 13:07:53 compute-0 systemd[1]: libpod-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope: Deactivated successfully.
Oct 01 13:07:53 compute-0 podman[73922]: 2025-10-01 13:07:53.963587576 +0000 UTC m=+0.492558499 container died bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c0be3d06252a15283f3636d0739b7f73d3cf0a8b7aec486ecb25e9fb55c09d-merged.mount: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[73922]: 2025-10-01 13:07:54.022376331 +0000 UTC m=+0.551347244 container remove bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a (image=quay.io/ceph/ceph:v18, name=elegant_margulis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:07:54 compute-0 systemd[1]: libpod-conmon-bcf60002975c7ce04eaa785e564c561b133abc3dff79b5faa2c63c91b0b9b06a.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.091521065 +0000 UTC m=+0.043566773 container create bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:07:54 compute-0 systemd[1]: Started libpod-conmon-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope.
Oct 01 13:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.158545691 +0000 UTC m=+0.110591419 container init bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.167804575 +0000 UTC m=+0.119850313 container start bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.071268493 +0000 UTC m=+0.023314211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:54 compute-0 vigilant_jemison[73972]: 167 167
Oct 01 13:07:54 compute-0 systemd[1]: libpod-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.1720199 +0000 UTC m=+0.124065618 container attach bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.172410102 +0000 UTC m=+0.124455800 container died bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:07:54 compute-0 podman[73957]: 2025-10-01 13:07:54.211721258 +0000 UTC m=+0.163766986 container remove bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6 (image=quay.io/ceph/ceph:v18, name=vigilant_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:07:54 compute-0 systemd[1]: libpod-conmon-bc7c15dbf21f5b991399e0f422a4823978b508ef647ce17a4f1595d756e224c6.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.295174777 +0000 UTC m=+0.052486787 container create 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:07:54 compute-0 systemd[1]: Started libpod-conmon-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope.
Oct 01 13:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.351273607 +0000 UTC m=+0.108585627 container init 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.359787307 +0000 UTC m=+0.117099337 container start 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.363269157 +0000 UTC m=+0.120581157 container attach 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.276631098 +0000 UTC m=+0.033943148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:54 compute-0 vigilant_maxwell[74006]: AQCqJ91oUnhwFhAABaeVGSJyDEVZ7+ahmpC9kw==
Oct 01 13:07:54 compute-0 systemd[1]: libpod-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.379386099 +0000 UTC m=+0.136698099 container died 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:54 compute-0 podman[73989]: 2025-10-01 13:07:54.413530482 +0000 UTC m=+0.170842482 container remove 50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64 (image=quay.io/ceph/ceph:v18, name=vigilant_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:07:54 compute-0 systemd[1]: libpod-conmon-50ed018867cb4d0d16dfb40cb611a31d7533932e2b4d8d443b8370c5fb0f5a64.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.49321595 +0000 UTC m=+0.048520711 container create 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:07:54 compute-0 systemd[1]: Started libpod-conmon-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope.
Oct 01 13:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.551765368 +0000 UTC m=+0.107070149 container init 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.556163157 +0000 UTC m=+0.111467918 container start 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.559539354 +0000 UTC m=+0.114844135 container attach 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.478119501 +0000 UTC m=+0.033424282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:54 compute-0 friendly_benz[74041]: AQCqJ91omZAjIxAAAr0DNz1fyp3+kL33rG2Ijg==
Oct 01 13:07:54 compute-0 systemd[1]: libpod-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.594649999 +0000 UTC m=+0.149954760 container died 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-454afb3821cbde2544c55048ff76497c0d7c6b96b761424f6b5415f99d25ce35-merged.mount: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[74025]: 2025-10-01 13:07:54.624359341 +0000 UTC m=+0.179664102 container remove 42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a (image=quay.io/ceph/ceph:v18, name=friendly_benz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:54 compute-0 systemd[1]: libpod-conmon-42bfaab9659ced7d2e3ee73c26024568f90bec6fdf0a7b2930c1ba93ba7a8d8a.scope: Deactivated successfully.
Oct 01 13:07:54 compute-0 podman[74058]: 2025-10-01 13:07:54.689897681 +0000 UTC m=+0.044218945 container create aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:07:54 compute-0 systemd[1]: Started libpod-conmon-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope.
Oct 01 13:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:54 compute-0 podman[74058]: 2025-10-01 13:07:54.671035762 +0000 UTC m=+0.025357056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:55 compute-0 podman[74058]: 2025-10-01 13:07:55.097947098 +0000 UTC m=+0.452268392 container init aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:07:55 compute-0 podman[74058]: 2025-10-01 13:07:55.105497647 +0000 UTC m=+0.459818921 container start aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:55 compute-0 podman[74058]: 2025-10-01 13:07:55.110892028 +0000 UTC m=+0.465213302 container attach aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:07:55 compute-0 affectionate_cray[74077]: AQCrJ91o0O20BxAAbuNQlAgvDf2C/Y5Su5seBA==
Oct 01 13:07:55 compute-0 systemd[1]: libpod-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74058]: 2025-10-01 13:07:55.133188555 +0000 UTC m=+0.487509869 container died aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:07:55 compute-0 podman[74058]: 2025-10-01 13:07:55.180962911 +0000 UTC m=+0.535284205 container remove aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2 (image=quay.io/ceph/ceph:v18, name=affectionate_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:07:55 compute-0 systemd[1]: libpod-conmon-aedba631649b8969c1b6b4d0dc3bb91f0b6288376eaa2823e0540da4d7117bf2.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.257430107 +0000 UTC m=+0.050855464 container create ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:07:55 compute-0 systemd[1]: Started libpod-conmon-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope.
Oct 01 13:07:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9468441a90a191f97c60dd7dfc5dda7211a4b8916ed54f73ddf537b983844191/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.323229825 +0000 UTC m=+0.116655242 container init ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.328473652 +0000 UTC m=+0.121899019 container start ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.234080396 +0000 UTC m=+0.027505763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.331954961 +0000 UTC m=+0.125380328 container attach ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:07:55 compute-0 awesome_matsumoto[74114]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct 01 13:07:55 compute-0 awesome_matsumoto[74114]: setting min_mon_release = pacific
Oct 01 13:07:55 compute-0 awesome_matsumoto[74114]: /usr/bin/monmaptool: set fsid to eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:55 compute-0 awesome_matsumoto[74114]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct 01 13:07:55 compute-0 systemd[1]: libpod-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.36815526 +0000 UTC m=+0.161580607 container died ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:07:55 compute-0 podman[74097]: 2025-10-01 13:07:55.399740622 +0000 UTC m=+0.193165969 container remove ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0 (image=quay.io/ceph/ceph:v18, name=awesome_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:07:55 compute-0 systemd[1]: libpod-conmon-ec44cd41c7c6f0581a22a03a11a4e0eaf53408cf2822f01891fd19a26e632aa0.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.465628354 +0000 UTC m=+0.043727729 container create e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:07:55 compute-0 systemd[1]: Started libpod-conmon-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope.
Oct 01 13:07:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.44597554 +0000 UTC m=+0.024074905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.556679092 +0000 UTC m=+0.134778477 container init e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.562234958 +0000 UTC m=+0.140334323 container start e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.565219213 +0000 UTC m=+0.143318598 container attach e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:07:55 compute-0 systemd[1]: libpod-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.656340424 +0000 UTC m=+0.234439769 container died e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f580bdb2013f7e1af51d19b8c79cc289a2256727d2b2a7af6f024c75a43ad7-merged.mount: Deactivated successfully.
Oct 01 13:07:55 compute-0 podman[74134]: 2025-10-01 13:07:55.689782876 +0000 UTC m=+0.267882231 container remove e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc (image=quay.io/ceph/ceph:v18, name=gifted_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:55 compute-0 systemd[1]: libpod-conmon-e0db59c4ff3d28fdfbc794254bfd8d57887c15304572590be2cfa6db77dfd8dc.scope: Deactivated successfully.
Oct 01 13:07:55 compute-0 systemd[1]: Reloading.
Oct 01 13:07:55 compute-0 systemd-rc-local-generator[74217]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:55 compute-0 systemd-sysv-generator[74220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:55 compute-0 systemd[1]: Reloading.
Oct 01 13:07:56 compute-0 systemd-sysv-generator[74255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:56 compute-0 systemd-rc-local-generator[74251]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:56 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct 01 13:07:56 compute-0 systemd[1]: Reloading.
Oct 01 13:07:56 compute-0 systemd-rc-local-generator[74284]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:56 compute-0 systemd-sysv-generator[74288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:56 compute-0 systemd[1]: Reached target Ceph cluster eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:07:56 compute-0 systemd[1]: Reloading.
Oct 01 13:07:56 compute-0 systemd-sysv-generator[74332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:56 compute-0 systemd-rc-local-generator[74326]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:56 compute-0 systemd[1]: Reloading.
Oct 01 13:07:56 compute-0 systemd-rc-local-generator[74366]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:07:56 compute-0 systemd-sysv-generator[74369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:07:56 compute-0 systemd[1]: Created slice Slice /system/ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:07:56 compute-0 systemd[1]: Reached target System Time Set.
Oct 01 13:07:56 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct 01 13:07:56 compute-0 systemd[1]: Starting Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:57 compute-0 podman[74425]: 2025-10-01 13:07:57.140163533 +0000 UTC m=+0.034974180 container create c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 podman[74425]: 2025-10-01 13:07:57.201433258 +0000 UTC m=+0.096243985 container init c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:07:57 compute-0 podman[74425]: 2025-10-01 13:07:57.207108918 +0000 UTC m=+0.101919605 container start c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:07:57 compute-0 bash[74425]: c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008
Oct 01 13:07:57 compute-0 podman[74425]: 2025-10-01 13:07:57.124361502 +0000 UTC m=+0.019172169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:57 compute-0 systemd[1]: Started Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:07:57 compute-0 ceph-mon[74447]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: pidfile_write: ignore empty --pid-file
Oct 01 13:07:57 compute-0 ceph-mon[74447]: load: jerasure load: lrc 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Git sha 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: DB SUMMARY
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: DB Session ID:  CA7YKDRE0VP79L6Q3AHS
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                                     Options.env: 0x555a17577c40
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                                Options.info_log: 0x555a189c2e80
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                                 Options.wal_dir: 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                    Options.write_buffer_manager: 0x555a189d2b40
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                               Options.row_cache: None
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                              Options.wal_filter: None
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.wal_compression: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.max_background_jobs: 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.max_total_wal_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:       Options.compaction_readahead_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Compression algorithms supported:
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kZSTD supported: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:           Options.merge_operator: 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:        Options.compaction_filter: None
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555a189c2a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x555a189bb1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:        Options.write_buffer_size: 33554432
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:  Options.max_write_buffer_number: 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.compression: NoCompression
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.num_levels: 7
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077272303, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077273977, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "CA7YKDRE0VP79L6Q3AHS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324077274109, "job": 1, "event": "recovery_finished"}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555a189e4e00
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: DB pointer 0x555a18a6e000
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:07:57 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555a189bb1f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:07:57 compute-0 ceph-mon[74447]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@-1(???) e0 preinit fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(probing) e0 win_standalone_election
Oct 01 13:07:57 compute-0 ceph-mon[74447]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.302641988 +0000 UTC m=+0.056015818 container create ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 01 13:07:57 compute-0 ceph-mon[74447]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-01T13:07:55.606214Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).mds e1 new map
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [DBG] : fsmap 
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mkfs eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:07:57 compute-0 systemd[1]: Started libpod-conmon-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope.
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.279126442 +0000 UTC m=+0.032500352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de08cd6a6b388f86b0540f71bd401b67f4c91ce29ed0ff6bae3855b9fb6596d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.411320487 +0000 UTC m=+0.164694367 container init ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.417543814 +0000 UTC m=+0.170917664 container start ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.420799637 +0000 UTC m=+0.174173517 container attach ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:07:57 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 13:07:57 compute-0 ceph-mon[74447]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846297465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:   cluster:
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     id:     eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     health: HEALTH_OK
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:  
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:   services:
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     mon: 1 daemons, quorum compute-0 (age 0.477885s)
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     mgr: no daemons active
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     osd: 0 osds: 0 up, 0 in
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:  
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:   data:
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     pools:   0 pools, 0 pgs
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     objects: 0 objects, 0 B
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     usage:   0 B used, 0 B / 0 B avail
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:     pgs:     
Oct 01 13:07:57 compute-0 lucid_ardinghelli[74503]:  
Oct 01 13:07:57 compute-0 systemd[1]: libpod-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope: Deactivated successfully.
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.806916548 +0000 UTC m=+0.560290408 container died ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:07:57 compute-0 podman[74448]: 2025-10-01 13:07:57.847997052 +0000 UTC m=+0.601370882 container remove ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979 (image=quay.io/ceph/ceph:v18, name=lucid_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:07:57 compute-0 systemd[1]: libpod-conmon-ce32dc493adb23f1e3969acc90148188356ed703ada0adada5b0760dc8da6979.scope: Deactivated successfully.
Oct 01 13:07:57 compute-0 podman[74540]: 2025-10-01 13:07:57.944144502 +0000 UTC m=+0.066442419 container create 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:07:57 compute-0 systemd[1]: Started libpod-conmon-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope.
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:57.908197031 +0000 UTC m=+0.030494999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:58.026542527 +0000 UTC m=+0.148840424 container init 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:58.037510255 +0000 UTC m=+0.159808122 container start 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:58.040692966 +0000 UTC m=+0.162990873 container attach 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:07:58 compute-0 ceph-mon[74447]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:07:58 compute-0 ceph-mon[74447]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 13:07:58 compute-0 ceph-mon[74447]: fsmap 
Oct 01 13:07:58 compute-0 ceph-mon[74447]: osdmap e1: 0 total, 0 up, 0 in
Oct 01 13:07:58 compute-0 ceph-mon[74447]: mgrmap e1: no daemons active
Oct 01 13:07:58 compute-0 ceph-mon[74447]: from='client.? 192.168.122.100:0/2846297465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 13:07:58 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 13:07:58 compute-0 ceph-mon[74447]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:07:58 compute-0 ceph-mon[74447]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 13:07:58 compute-0 focused_ishizaka[74556]: 
Oct 01 13:07:58 compute-0 focused_ishizaka[74556]: [global]
Oct 01 13:07:58 compute-0 focused_ishizaka[74556]:         fsid = eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:07:58 compute-0 focused_ishizaka[74556]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct 01 13:07:58 compute-0 focused_ishizaka[74556]:         osd_crush_chooseleaf_type = 0
Oct 01 13:07:58 compute-0 systemd[1]: libpod-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope: Deactivated successfully.
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:58.420912419 +0000 UTC m=+0.543210306 container died 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-97ee78925199ea3ada65a08da2fddabff236af049fea2ee51e67c918eb0549fd-merged.mount: Deactivated successfully.
Oct 01 13:07:58 compute-0 podman[74540]: 2025-10-01 13:07:58.45970411 +0000 UTC m=+0.582001997 container remove 88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3 (image=quay.io/ceph/ceph:v18, name=focused_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:07:58 compute-0 systemd[1]: libpod-conmon-88c80767a748ac431297c46d76decc1982010521295a24ed8fb2c37fd99603f3.scope: Deactivated successfully.
Oct 01 13:07:58 compute-0 podman[74594]: 2025-10-01 13:07:58.530350482 +0000 UTC m=+0.049771900 container create 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:07:58 compute-0 systemd[1]: Started libpod-conmon-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope.
Oct 01 13:07:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:07:58 compute-0 podman[74594]: 2025-10-01 13:07:58.502846729 +0000 UTC m=+0.022268197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:07:58 compute-0 podman[74594]: 2025-10-01 13:07:58.605712803 +0000 UTC m=+0.125134201 container init 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:07:58 compute-0 podman[74594]: 2025-10-01 13:07:58.614685137 +0000 UTC m=+0.134106525 container start 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:07:58 compute-0 podman[74594]: 2025-10-01 13:07:58.617499197 +0000 UTC m=+0.136920585 container attach 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:07:58 compute-0 sshd-session[74438]: Received disconnect from 27.254.137.144 port 43720:11: Bye Bye [preauth]
Oct 01 13:07:58 compute-0 sshd-session[74438]: Disconnected from authenticating user root 27.254.137.144 port 43720 [preauth]
Oct 01 13:07:59 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:07:59 compute-0 ceph-mon[74447]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416105299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:07:59 compute-0 systemd[1]: libpod-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope: Deactivated successfully.
Oct 01 13:07:59 compute-0 podman[74594]: 2025-10-01 13:07:59.040453146 +0000 UTC m=+0.559874574 container died 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-741ff91c042e7b2ca5432006091ef442c9a7ba74870882062fd26864badcfb2d-merged.mount: Deactivated successfully.
Oct 01 13:07:59 compute-0 podman[74594]: 2025-10-01 13:07:59.083850893 +0000 UTC m=+0.603272281 container remove 9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8 (image=quay.io/ceph/ceph:v18, name=determined_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:07:59 compute-0 systemd[1]: libpod-conmon-9d4bd0cb2a381d32051e4256ca6117a79627135f0e105a3a5e2acf9b56a3d5e8.scope: Deactivated successfully.
Oct 01 13:07:59 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:07:59 compute-0 ceph-mon[74447]: from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:07:59 compute-0 ceph-mon[74447]: from='client.? 192.168.122.100:0/772669610' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 13:07:59 compute-0 ceph-mon[74447]: from='client.? 192.168.122.100:0/416105299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:07:59 compute-0 ceph-mon[74447]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 01 13:07:59 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 01 13:07:59 compute-0 ceph-mon[74447]: mon.compute-0@0(leader) e1 shutdown
Oct 01 13:07:59 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74443]: 2025-10-01T13:07:59.539+0000 7f3a242dc640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct 01 13:07:59 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74443]: 2025-10-01T13:07:59.539+0000 7f3a242dc640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct 01 13:07:59 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 13:07:59 compute-0 ceph-mon[74447]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 13:07:59 compute-0 podman[74679]: 2025-10-01 13:07:59.641370373 +0000 UTC m=+0.365782297 container died c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c234bc0c11c5faf1ddbed49676c1825f724dde31e435508583b482730a5ba3d6-merged.mount: Deactivated successfully.
Oct 01 13:07:59 compute-0 podman[74679]: 2025-10-01 13:07:59.747480289 +0000 UTC m=+0.471892233 container remove c0f6eefbaf6d3e14c44a3af400bdf6a821a561d9919fa9a360c56f9ade45b008 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:07:59 compute-0 bash[74679]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0
Oct 01 13:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:07:59 compute-0 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0.service: Deactivated successfully.
Oct 01 13:07:59 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:07:59 compute-0 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0.service: Consumed 1.012s CPU time.
Oct 01 13:07:59 compute-0 systemd[1]: Starting Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 01 13:08:00 compute-0 podman[74782]: 2025-10-01 13:08:00.175012593 +0000 UTC m=+0.047087985 container create dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc7f86342d976758df2b1298b55e54b95dbf922a72e6063d13a6c43e749dc6b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 podman[74782]: 2025-10-01 13:08:00.148489992 +0000 UTC m=+0.020565374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:00 compute-0 podman[74782]: 2025-10-01 13:08:00.255852729 +0000 UTC m=+0.127928131 container init dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:00 compute-0 podman[74782]: 2025-10-01 13:08:00.260840097 +0000 UTC m=+0.132915469 container start dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:08:00 compute-0 bash[74782]: dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320
Oct 01 13:08:00 compute-0 systemd[1]: Started Ceph mon.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:08:00 compute-0 ceph-mon[74802]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: pidfile_write: ignore empty --pid-file
Oct 01 13:08:00 compute-0 ceph-mon[74802]: load: jerasure load: lrc 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Git sha 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: DB SUMMARY
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: DB Session ID:  NJZTWL88H5HSB4Q4NEC9
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55668 ; 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                                     Options.env: 0x55daa30d0c40
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                                Options.info_log: 0x55daa554b040
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                                 Options.wal_dir: 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                    Options.write_buffer_manager: 0x55daa555ab40
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                               Options.row_cache: None
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                              Options.wal_filter: None
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.wal_compression: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.max_background_jobs: 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.max_total_wal_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:       Options.compaction_readahead_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Compression algorithms supported:
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kZSTD supported: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:           Options.merge_operator: 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:        Options.compaction_filter: None
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daa554ac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55daa55431f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:        Options.write_buffer_size: 33554432
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:  Options.max_write_buffer_number: 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.compression: NoCompression
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.num_levels: 7
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080294585, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080306270, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53789, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51378, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324080, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324080306365, "job": 1, "event": "recovery_finished"}
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55daa556ce00
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: DB pointer 0x55daa55f6000
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.85 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0   55.85 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 1.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 1.44 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 512.00 MB usage: 25.89 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,25.11 KB,0.00478923%) FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:08:00 compute-0 ceph-mon[74802]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???) e1 preinit fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).mds e1 new map
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@0(probing) e1 win_standalone_election
Oct 01 13:08:00 compute-0 ceph-mon[74802]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 13:08:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 01 13:08:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct 01 13:08:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.355768629 +0000 UTC m=+0.056040389 container create e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 01 13:08:00 compute-0 ceph-mon[74802]: fsmap 
Oct 01 13:08:00 compute-0 ceph-mon[74802]: osdmap e1: 0 total, 0 up, 0 in
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mgrmap e1: no daemons active
Oct 01 13:08:00 compute-0 systemd[1]: Started libpod-conmon-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope.
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.326840341 +0000 UTC m=+0.027112171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434123d9f0ea530af3ede4cf6b664d15441d268877432e11c8ea1a799be4d868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.482409877 +0000 UTC m=+0.182681627 container init e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.492663612 +0000 UTC m=+0.192935372 container start e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.511864331 +0000 UTC m=+0.212136101 container attach e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:08:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct 01 13:08:00 compute-0 systemd[1]: libpod-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope: Deactivated successfully.
Oct 01 13:08:00 compute-0 podman[74803]: 2025-10-01 13:08:00.950376254 +0000 UTC m=+0.650647994 container died e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:08:01 compute-0 podman[74803]: 2025-10-01 13:08:01.033980587 +0000 UTC m=+0.734252317 container remove e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511 (image=quay.io/ceph/ceph:v18, name=youthful_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:01 compute-0 systemd[1]: libpod-conmon-e0e728c5f547f0869b625f6f882c799b9d1691bfd5424bf3c2a878d7f1681511.scope: Deactivated successfully.
Oct 01 13:08:01 compute-0 podman[74898]: 2025-10-01 13:08:01.110428412 +0000 UTC m=+0.055605375 container create de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:01 compute-0 systemd[1]: Started libpod-conmon-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope.
Oct 01 13:08:01 compute-0 podman[74898]: 2025-10-01 13:08:01.083930962 +0000 UTC m=+0.029108005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:01 compute-0 podman[74898]: 2025-10-01 13:08:01.202417771 +0000 UTC m=+0.147594754 container init de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:08:01 compute-0 podman[74898]: 2025-10-01 13:08:01.212848733 +0000 UTC m=+0.158025696 container start de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:08:01 compute-0 podman[74898]: 2025-10-01 13:08:01.216301201 +0000 UTC m=+0.161478174 container attach de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct 01 13:08:01 compute-0 systemd[1]: libpod-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope: Deactivated successfully.
Oct 01 13:08:01 compute-0 podman[74940]: 2025-10-01 13:08:01.663756989 +0000 UTC m=+0.022708211 container died de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0a6cedc22206bc745229276f2014401093546f5140c0ad6c7f26d3b7732038-merged.mount: Deactivated successfully.
Oct 01 13:08:01 compute-0 podman[74940]: 2025-10-01 13:08:01.697909942 +0000 UTC m=+0.056861144 container remove de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:01 compute-0 systemd[1]: libpod-conmon-de977fc2649baa14eb86cd9da333624a29372a7492ebf397b181321e68a3d3ff.scope: Deactivated successfully.
Oct 01 13:08:01 compute-0 systemd[1]: Reloading.
Oct 01 13:08:01 compute-0 systemd-rc-local-generator[74982]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:08:01 compute-0 systemd-sysv-generator[74987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:08:01 compute-0 sshd-session[73916]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:08:01 compute-0 sshd-session[73916]: banner exchange: Connection from 202.103.55.158 port 41280: Connection timed out
Oct 01 13:08:02 compute-0 systemd[1]: Reloading.
Oct 01 13:08:02 compute-0 systemd-rc-local-generator[75026]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:08:02 compute-0 systemd-sysv-generator[75030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:08:02 compute-0 systemd[1]: Starting Ceph mgr.compute-0.puxjpb for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:08:02 compute-0 podman[75083]: 2025-10-01 13:08:02.588491629 +0000 UTC m=+0.056814053 container create d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d0de1d4e81cdfa5f06315b3236867484dace739878f00b9f28e5f9862aba8/merged/var/lib/ceph/mgr/ceph-compute-0.puxjpb supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 podman[75083]: 2025-10-01 13:08:02.656072413 +0000 UTC m=+0.124394877 container init d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:02 compute-0 podman[75083]: 2025-10-01 13:08:02.568640989 +0000 UTC m=+0.036963463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:02 compute-0 podman[75083]: 2025-10-01 13:08:02.667916769 +0000 UTC m=+0.136239213 container start d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:08:02 compute-0 bash[75083]: d581f7f0a3e63ca8603611784f26da5ea3157b3a16113cc88b43162dcd3c9163
Oct 01 13:08:02 compute-0 systemd[1]: Started Ceph mgr.compute-0.puxjpb for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:08:02 compute-0 ceph-mgr[75103]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:08:02 compute-0 ceph-mgr[75103]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 13:08:02 compute-0 ceph-mgr[75103]: pidfile_write: ignore empty --pid-file
Oct 01 13:08:02 compute-0 podman[75104]: 2025-10-01 13:08:02.757483951 +0000 UTC m=+0.049302825 container create 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:02 compute-0 systemd[1]: Started libpod-conmon-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope.
Oct 01 13:08:02 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'alerts'
Oct 01 13:08:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:02 compute-0 podman[75104]: 2025-10-01 13:08:02.731896709 +0000 UTC m=+0.023715593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:02 compute-0 podman[75104]: 2025-10-01 13:08:02.859427296 +0000 UTC m=+0.151246210 container init 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:08:02 compute-0 podman[75104]: 2025-10-01 13:08:02.867183052 +0000 UTC m=+0.159001896 container start 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:02 compute-0 podman[75104]: 2025-10-01 13:08:02.875125763 +0000 UTC m=+0.166944637 container attach 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:03 compute-0 ceph-mgr[75103]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:08:03 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'balancer'
Oct 01 13:08:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:03.118+0000 7f0e0936f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:08:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042162465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:03 compute-0 condescending_allen[75144]: 
Oct 01 13:08:03 compute-0 condescending_allen[75144]: {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "health": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "status": "HEALTH_OK",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "checks": {},
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "mutes": []
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "election_epoch": 5,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "quorum": [
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         0
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     ],
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "quorum_names": [
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "compute-0"
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     ],
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "quorum_age": 2,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "monmap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "epoch": 1,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "min_mon_release_name": "reef",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_mons": 1
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "osdmap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "epoch": 1,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_osds": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_up_osds": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "osd_up_since": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_in_osds": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "osd_in_since": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_remapped_pgs": 0
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "pgmap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "pgs_by_state": [],
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_pgs": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_pools": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_objects": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "data_bytes": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "bytes_used": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "bytes_avail": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "bytes_total": 0
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "fsmap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "epoch": 1,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "by_rank": [],
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "up:standby": 0
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "mgrmap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "available": false,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "num_standbys": 0,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "modules": [
Oct 01 13:08:03 compute-0 condescending_allen[75144]:             "iostat",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:             "nfs",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:             "restful"
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         ],
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "services": {}
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "servicemap": {
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "epoch": 1,
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:03 compute-0 condescending_allen[75144]:         "services": {}
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     },
Oct 01 13:08:03 compute-0 condescending_allen[75144]:     "progress_events": {}
Oct 01 13:08:03 compute-0 condescending_allen[75144]: }
Oct 01 13:08:03 compute-0 systemd[1]: libpod-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope: Deactivated successfully.
Oct 01 13:08:03 compute-0 podman[75104]: 2025-10-01 13:08:03.27590747 +0000 UTC m=+0.567726294 container died 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8319fda18d1ec69bdbf12e196eefdeb2f9d114b1238baaadff17f9207f88730b-merged.mount: Deactivated successfully.
Oct 01 13:08:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3042162465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:03 compute-0 ceph-mgr[75103]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:08:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:03.362+0000 7f0e0936f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:08:03 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'cephadm'
Oct 01 13:08:03 compute-0 podman[75104]: 2025-10-01 13:08:03.384719662 +0000 UTC m=+0.676538526 container remove 63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37 (image=quay.io/ceph/ceph:v18, name=condescending_allen, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:08:03 compute-0 systemd[1]: libpod-conmon-63ba9eb78dba1110a8f61ddc27ac9ca7168991d577e66fe6a69bb0326974ac37.scope: Deactivated successfully.
Oct 01 13:08:05 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'crash'
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.460922816 +0000 UTC m=+0.047041873 container create 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:08:05 compute-0 systemd[1]: Started libpod-conmon-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope.
Oct 01 13:08:05 compute-0 ceph-mgr[75103]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:08:05 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'dashboard'
Oct 01 13:08:05 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:05.519+0000 7f0e0936f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:08:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.44337423 +0000 UTC m=+0.029493297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.541544374 +0000 UTC m=+0.127663441 container init 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.55244554 +0000 UTC m=+0.138564587 container start 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.557152059 +0000 UTC m=+0.143271106 container attach 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:08:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181456265' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]: 
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]: {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "health": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "status": "HEALTH_OK",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "checks": {},
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "mutes": []
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "election_epoch": 5,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "quorum": [
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         0
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     ],
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "quorum_names": [
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "compute-0"
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     ],
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "quorum_age": 5,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "monmap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "epoch": 1,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "min_mon_release_name": "reef",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_mons": 1
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "osdmap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "epoch": 1,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_osds": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_up_osds": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "osd_up_since": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_in_osds": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "osd_in_since": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_remapped_pgs": 0
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "pgmap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "pgs_by_state": [],
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_pgs": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_pools": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_objects": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "data_bytes": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "bytes_used": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "bytes_avail": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "bytes_total": 0
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "fsmap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "epoch": 1,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "by_rank": [],
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "up:standby": 0
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "mgrmap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "available": false,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "num_standbys": 0,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "modules": [
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:             "iostat",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:             "nfs",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:             "restful"
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         ],
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "services": {}
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "servicemap": {
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "epoch": 1,
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:         "services": {}
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     },
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]:     "progress_events": {}
Oct 01 13:08:05 compute-0 dreamy_mayer[75211]: }
Oct 01 13:08:05 compute-0 systemd[1]: libpod-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope: Deactivated successfully.
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.944489499 +0000 UTC m=+0.530608536 container died 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb022b3323a0de39516fa5ec775d9b230d42d8bbb1e7dd9731fc034682c44db8-merged.mount: Deactivated successfully.
Oct 01 13:08:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/181456265' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:05 compute-0 podman[75194]: 2025-10-01 13:08:05.990143707 +0000 UTC m=+0.576262764 container remove 01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49 (image=quay.io/ceph/ceph:v18, name=dreamy_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:08:06 compute-0 systemd[1]: libpod-conmon-01f3a7b04f0ffee1ce3ddb017b2f42f7e4ba16c1f0fa526ddaab994626639e49.scope: Deactivated successfully.
Oct 01 13:08:06 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'devicehealth'
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.134+0000 7f0e0936f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'diskprediction_local'
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]:   from numpy import show_config as show_numpy_config
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.629+0000 7f0e0936f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'influx'
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:07.857+0000 7f0e0936f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 13:08:07 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'insights'
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.055560279 +0000 UTC m=+0.043116888 container create d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:08:08 compute-0 systemd[1]: Started libpod-conmon-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope.
Oct 01 13:08:08 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'iostat'
Oct 01 13:08:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.121031147 +0000 UTC m=+0.108587816 container init d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.126706367 +0000 UTC m=+0.114263006 container start d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.034568793 +0000 UTC m=+0.022125412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.130688374 +0000 UTC m=+0.118245083 container attach d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:08:08 compute-0 ceph-mgr[75103]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 13:08:08 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:08.331+0000 7f0e0936f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 13:08:08 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'k8sevents'
Oct 01 13:08:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524194599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:08 compute-0 boring_tu[75265]: 
Oct 01 13:08:08 compute-0 boring_tu[75265]: {
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "health": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "status": "HEALTH_OK",
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "checks": {},
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "mutes": []
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "election_epoch": 5,
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "quorum": [
Oct 01 13:08:08 compute-0 boring_tu[75265]:         0
Oct 01 13:08:08 compute-0 boring_tu[75265]:     ],
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "quorum_names": [
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "compute-0"
Oct 01 13:08:08 compute-0 boring_tu[75265]:     ],
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "quorum_age": 8,
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "monmap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "epoch": 1,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "min_mon_release_name": "reef",
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_mons": 1
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "osdmap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "epoch": 1,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_osds": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_up_osds": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "osd_up_since": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_in_osds": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "osd_in_since": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_remapped_pgs": 0
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "pgmap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "pgs_by_state": [],
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_pgs": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_pools": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_objects": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "data_bytes": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "bytes_used": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "bytes_avail": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "bytes_total": 0
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "fsmap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "epoch": 1,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "by_rank": [],
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "up:standby": 0
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "mgrmap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "available": false,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "num_standbys": 0,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "modules": [
Oct 01 13:08:08 compute-0 boring_tu[75265]:             "iostat",
Oct 01 13:08:08 compute-0 boring_tu[75265]:             "nfs",
Oct 01 13:08:08 compute-0 boring_tu[75265]:             "restful"
Oct 01 13:08:08 compute-0 boring_tu[75265]:         ],
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "services": {}
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "servicemap": {
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "epoch": 1,
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:08 compute-0 boring_tu[75265]:         "services": {}
Oct 01 13:08:08 compute-0 boring_tu[75265]:     },
Oct 01 13:08:08 compute-0 boring_tu[75265]:     "progress_events": {}
Oct 01 13:08:08 compute-0 boring_tu[75265]: }
Oct 01 13:08:08 compute-0 systemd[1]: libpod-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope: Deactivated successfully.
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.511286409 +0000 UTC m=+0.498843018 container died d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-935c758e3efc223a2b707f72f9b32f8cf2e92c7fbd843493d59dbea7abb46bb2-merged.mount: Deactivated successfully.
Oct 01 13:08:08 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/524194599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:08 compute-0 podman[75249]: 2025-10-01 13:08:08.570682073 +0000 UTC m=+0.558238712 container remove d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196 (image=quay.io/ceph/ceph:v18, name=boring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:08:08 compute-0 systemd[1]: libpod-conmon-d09a189191ac01fb712d1aeeb3dbe34c0cf1c034bf7a44d79e4c79ea1178f196.scope: Deactivated successfully.
Oct 01 13:08:10 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'localpool'
Oct 01 13:08:10 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'mds_autoscaler'
Oct 01 13:08:10 compute-0 podman[75303]: 2025-10-01 13:08:10.722712854 +0000 UTC m=+0.116177728 container create 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:10 compute-0 podman[75303]: 2025-10-01 13:08:10.645916937 +0000 UTC m=+0.039381851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:11 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'mirroring'
Oct 01 13:08:11 compute-0 systemd[1]: Started libpod-conmon-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope.
Oct 01 13:08:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:11 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'nfs'
Oct 01 13:08:11 compute-0 podman[75303]: 2025-10-01 13:08:11.637630092 +0000 UTC m=+1.031095026 container init 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:08:11 compute-0 podman[75303]: 2025-10-01 13:08:11.643612472 +0000 UTC m=+1.037077346 container start 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:08:11 compute-0 podman[75303]: 2025-10-01 13:08:11.64736826 +0000 UTC m=+1.040833204 container attach 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:08:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322485282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:11 compute-0 agitated_carver[75319]: 
Oct 01 13:08:11 compute-0 agitated_carver[75319]: {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "health": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "status": "HEALTH_OK",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "checks": {},
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "mutes": []
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "election_epoch": 5,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "quorum": [
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         0
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     ],
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "quorum_names": [
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "compute-0"
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     ],
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "quorum_age": 11,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "monmap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "epoch": 1,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "min_mon_release_name": "reef",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_mons": 1
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "osdmap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "epoch": 1,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_osds": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_up_osds": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "osd_up_since": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_in_osds": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "osd_in_since": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_remapped_pgs": 0
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "pgmap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "pgs_by_state": [],
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_pgs": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_pools": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_objects": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "data_bytes": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "bytes_used": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "bytes_avail": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "bytes_total": 0
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "fsmap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "epoch": 1,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "by_rank": [],
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "up:standby": 0
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "mgrmap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "available": false,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "num_standbys": 0,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "modules": [
Oct 01 13:08:11 compute-0 agitated_carver[75319]:             "iostat",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:             "nfs",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:             "restful"
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         ],
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "services": {}
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "servicemap": {
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "epoch": 1,
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:11 compute-0 agitated_carver[75319]:         "services": {}
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     },
Oct 01 13:08:11 compute-0 agitated_carver[75319]:     "progress_events": {}
Oct 01 13:08:11 compute-0 agitated_carver[75319]: }
Oct 01 13:08:12 compute-0 systemd[1]: libpod-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope: Deactivated successfully.
Oct 01 13:08:12 compute-0 podman[75303]: 2025-10-01 13:08:12.005605718 +0000 UTC m=+1.399070622 container died 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-901002afb072df1a16767e010202920aaf70a067c0cc04a4ecc060a800b049e1-merged.mount: Deactivated successfully.
Oct 01 13:08:12 compute-0 podman[75303]: 2025-10-01 13:08:12.044643166 +0000 UTC m=+1.438108040 container remove 908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419 (image=quay.io/ceph/ceph:v18, name=agitated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:12 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3322485282' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:12 compute-0 systemd[1]: libpod-conmon-908406a5a6b97df0c9cda17fdef2f9bc4bc05226ee4abff840e9ad56d7d75419.scope: Deactivated successfully.
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'orchestrator'
Oct 01 13:08:12 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.088+0000 7f0e0936f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'osd_perf_query'
Oct 01 13:08:12 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.701+0000 7f0e0936f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 13:08:12 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'osd_support'
Oct 01 13:08:12 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:12.948+0000 7f0e0936f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'pg_autoscaler'
Oct 01 13:08:13 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.169+0000 7f0e0936f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'progress'
Oct 01 13:08:13 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.421+0000 7f0e0936f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 13:08:13 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'prometheus'
Oct 01 13:08:13 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:13.640+0000 7f0e0936f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.131524559 +0000 UTC m=+0.057805565 container create 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:08:14 compute-0 systemd[1]: Started libpod-conmon-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope.
Oct 01 13:08:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.111743131 +0000 UTC m=+0.038024157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.211848917 +0000 UTC m=+0.138130013 container init 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.217770185 +0000 UTC m=+0.144051211 container start 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.221885906 +0000 UTC m=+0.148166912 container attach 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860770698' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]: 
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]: {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "health": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "status": "HEALTH_OK",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "checks": {},
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "mutes": []
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "election_epoch": 5,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "quorum": [
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         0
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     ],
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "quorum_names": [
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "compute-0"
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     ],
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "quorum_age": 14,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "monmap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "epoch": 1,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "min_mon_release_name": "reef",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_mons": 1
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "osdmap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "epoch": 1,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_osds": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_up_osds": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "osd_up_since": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_in_osds": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "osd_in_since": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_remapped_pgs": 0
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "pgmap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "pgs_by_state": [],
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_pgs": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_pools": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_objects": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "data_bytes": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "bytes_used": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "bytes_avail": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "bytes_total": 0
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "fsmap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "epoch": 1,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "by_rank": [],
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "up:standby": 0
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "mgrmap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "available": false,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "num_standbys": 0,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "modules": [
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:             "iostat",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:             "nfs",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:             "restful"
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         ],
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "services": {}
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "servicemap": {
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "epoch": 1,
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:         "services": {}
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     },
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]:     "progress_events": {}
Oct 01 13:08:14 compute-0 vigilant_nightingale[75375]: }
Oct 01 13:08:14 compute-0 systemd[1]: libpod-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope: Deactivated successfully.
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.58985614 +0000 UTC m=+0.516137156 container died 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:14 compute-0 ceph-mgr[75103]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 13:08:14 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rbd_support'
Oct 01 13:08:14 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:14.592+0000 7f0e0936f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 13:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c74a4c30b5bcb8e630f39318936639432c966c289c0970a8eb50fe1a988dcc1e-merged.mount: Deactivated successfully.
Oct 01 13:08:14 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1860770698' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:14 compute-0 podman[75359]: 2025-10-01 13:08:14.648022137 +0000 UTC m=+0.574303183 container remove 12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b (image=quay.io/ceph/ceph:v18, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:08:14 compute-0 systemd[1]: libpod-conmon-12fafaede963fd251d6af2891946af318d2e339281f99db305ae532c7c493e7b.scope: Deactivated successfully.
Oct 01 13:08:14 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:14.897+0000 7f0e0936f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 13:08:14 compute-0 ceph-mgr[75103]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 13:08:14 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'restful'
Oct 01 13:08:15 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rgw'
Oct 01 13:08:16 compute-0 ceph-mgr[75103]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 13:08:16 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rook'
Oct 01 13:08:16 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:16.329+0000 7f0e0936f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 13:08:16 compute-0 podman[75413]: 2025-10-01 13:08:16.711230189 +0000 UTC m=+0.038891015 container create a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:08:16 compute-0 systemd[1]: Started libpod-conmon-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope.
Oct 01 13:08:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:16 compute-0 podman[75413]: 2025-10-01 13:08:16.773449602 +0000 UTC m=+0.101110538 container init a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 13:08:16 compute-0 podman[75413]: 2025-10-01 13:08:16.778171123 +0000 UTC m=+0.105831919 container start a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 13:08:16 compute-0 podman[75413]: 2025-10-01 13:08:16.7812377 +0000 UTC m=+0.108898576 container attach a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:16 compute-0 podman[75413]: 2025-10-01 13:08:16.69268865 +0000 UTC m=+0.020349526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:17 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826315196' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]: 
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]: {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "health": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "status": "HEALTH_OK",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "checks": {},
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "mutes": []
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "election_epoch": 5,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "quorum": [
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         0
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     ],
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "quorum_names": [
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "compute-0"
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     ],
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "quorum_age": 16,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "monmap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "epoch": 1,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "min_mon_release_name": "reef",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_mons": 1
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "osdmap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "epoch": 1,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_osds": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_up_osds": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "osd_up_since": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_in_osds": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "osd_in_since": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_remapped_pgs": 0
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "pgmap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "pgs_by_state": [],
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_pgs": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_pools": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_objects": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "data_bytes": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "bytes_used": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "bytes_avail": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "bytes_total": 0
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "fsmap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "epoch": 1,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "by_rank": [],
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "up:standby": 0
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "mgrmap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "available": false,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "num_standbys": 0,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "modules": [
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:             "iostat",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:             "nfs",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:             "restful"
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         ],
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "services": {}
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "servicemap": {
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "epoch": 1,
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:         "services": {}
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     },
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]:     "progress_events": {}
Oct 01 13:08:17 compute-0 vibrant_kirch[75429]: }
Oct 01 13:08:17 compute-0 systemd[1]: libpod-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope: Deactivated successfully.
Oct 01 13:08:17 compute-0 podman[75413]: 2025-10-01 13:08:17.17077729 +0000 UTC m=+0.498438146 container died a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6769f9a7cd8a4eb3ed785fab781a2f23c9c28ce00ea17d044e64fe56a9d25a7-merged.mount: Deactivated successfully.
Oct 01 13:08:17 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3826315196' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:17 compute-0 podman[75413]: 2025-10-01 13:08:17.215076955 +0000 UTC m=+0.542737751 container remove a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc (image=quay.io/ceph/ceph:v18, name=vibrant_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:08:17 compute-0 systemd[1]: libpod-conmon-a20bfe0a112dfdb7ebb793eb0d74f8c672dbdf56948ea198f0fa011bea9098dc.scope: Deactivated successfully.
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'selftest'
Oct 01 13:08:18 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.370+0000 7f0e0936f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 13:08:18 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.613+0000 7f0e0936f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'snap_schedule'
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 13:08:18 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'stats'
Oct 01 13:08:18 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:18.854+0000 7f0e0936f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 13:08:19 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'status'
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.290203664 +0000 UTC m=+0.043654065 container create ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:08:19 compute-0 systemd[1]: Started libpod-conmon-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope.
Oct 01 13:08:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:19 compute-0 ceph-mgr[75103]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 13:08:19 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'telegraf'
Oct 01 13:08:19 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:19.356+0000 7f0e0936f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.272066389 +0000 UTC m=+0.025516760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.385541779 +0000 UTC m=+0.138992140 container init ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.391182778 +0000 UTC m=+0.144633139 container start ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.394417751 +0000 UTC m=+0.147868142 container attach ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:08:19 compute-0 ceph-mgr[75103]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 13:08:19 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'telemetry'
Oct 01 13:08:19 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:19.594+0000 7f0e0936f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 13:08:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726755813' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:19 compute-0 zealous_hellman[75486]: 
Oct 01 13:08:19 compute-0 zealous_hellman[75486]: {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "health": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "status": "HEALTH_OK",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "checks": {},
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "mutes": []
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "election_epoch": 5,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "quorum": [
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         0
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     ],
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "quorum_names": [
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "compute-0"
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     ],
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "quorum_age": 19,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "monmap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "epoch": 1,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "min_mon_release_name": "reef",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_mons": 1
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "osdmap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "epoch": 1,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_osds": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_up_osds": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "osd_up_since": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_in_osds": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "osd_in_since": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_remapped_pgs": 0
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "pgmap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "pgs_by_state": [],
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_pgs": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_pools": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_objects": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "data_bytes": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "bytes_used": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "bytes_avail": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "bytes_total": 0
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "fsmap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "epoch": 1,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "by_rank": [],
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "up:standby": 0
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "mgrmap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "available": false,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "num_standbys": 0,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "modules": [
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:             "iostat",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:             "nfs",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:             "restful"
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         ],
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "services": {}
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "servicemap": {
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "epoch": 1,
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:         "services": {}
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     },
Oct 01 13:08:19 compute-0 zealous_hellman[75486]:     "progress_events": {}
Oct 01 13:08:19 compute-0 zealous_hellman[75486]: }
Oct 01 13:08:19 compute-0 systemd[1]: libpod-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope: Deactivated successfully.
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.779923653 +0000 UTC m=+0.533374014 container died ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14826cecbea1971e60a46317debbf567dc97aa9a650cf9758a48daf3c66664c-merged.mount: Deactivated successfully.
Oct 01 13:08:19 compute-0 podman[75470]: 2025-10-01 13:08:19.821075639 +0000 UTC m=+0.574526000 container remove ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0 (image=quay.io/ceph/ceph:v18, name=zealous_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:08:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3726755813' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:19 compute-0 systemd[1]: libpod-conmon-ada587f9ccc97689e2f45f88cb17930536fba6f0db561a4fc3c15b5b3d3946a0.scope: Deactivated successfully.
Oct 01 13:08:20 compute-0 ceph-mgr[75103]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 13:08:20 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'test_orchestrator'
Oct 01 13:08:20 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:20.197+0000 7f0e0936f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 13:08:20 compute-0 ceph-mgr[75103]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:20 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'volumes'
Oct 01 13:08:20 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:20.834+0000 7f0e0936f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'zabbix'
Oct 01 13:08:21 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:21.512+0000 7f0e0936f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 13:08:21 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:21.733+0000 7f0e0936f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: ms_deliver_dispatch: unhandled message 0x5578f8d671e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.puxjpb
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr handle_mgr_map Activating!
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.puxjpb(active, starting, since 0.0388835s)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr handle_mgr_map I am now activating
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e1 all = 1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: balancer
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer INFO root] Starting
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: crash
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:08:21
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Manager daemon compute-0.puxjpb is now available
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: devicehealth
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: iostat
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Starting
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: nfs
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: orchestrator
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: pg_autoscaler
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: progress
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [progress INFO root] Loading...
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [progress INFO root] No stored events to load
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [progress INFO root] Loaded [] historic events
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [progress INFO root] Loaded OSDMap, ready.
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] recovery thread starting
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] starting setup
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: rbd_support
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: restful
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [restful INFO root] server_addr: :: server_port: 8003
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [restful WARNING root] server not running: no certificate configured
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: status
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: telemetry
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] PerfHandler: starting
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TaskHandler: starting
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"} v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: [rbd_support INFO root] setup complete
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: Activating manager daemon compute-0.puxjpb
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mgrmap e2: compute-0.puxjpb(active, starting, since 0.0388835s)
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: Manager daemon compute-0.puxjpb is now available
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct 01 13:08:21 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: volumes
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct 01 13:08:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:21 compute-0 podman[75603]: 2025-10-01 13:08:21.889662411 +0000 UTC m=+0.045449223 container create 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:21 compute-0 systemd[1]: Started libpod-conmon-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope.
Oct 01 13:08:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:21 compute-0 podman[75603]: 2025-10-01 13:08:21.866407903 +0000 UTC m=+0.022194705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:21 compute-0 podman[75603]: 2025-10-01 13:08:21.984872662 +0000 UTC m=+0.140659514 container init 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:08:21 compute-0 podman[75603]: 2025-10-01 13:08:21.991184812 +0000 UTC m=+0.146971584 container start 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:21 compute-0 podman[75603]: 2025-10-01 13:08:21.994630751 +0000 UTC m=+0.150417553 container attach 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:08:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246215679' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:22 compute-0 confident_babbage[75618]: 
Oct 01 13:08:22 compute-0 confident_babbage[75618]: {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "health": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "status": "HEALTH_OK",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "checks": {},
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "mutes": []
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "election_epoch": 5,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "quorum": [
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         0
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     ],
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "quorum_names": [
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "compute-0"
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     ],
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "quorum_age": 22,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "monmap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "epoch": 1,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "min_mon_release_name": "reef",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_mons": 1
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "osdmap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "epoch": 1,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_osds": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_up_osds": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "osd_up_since": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_in_osds": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "osd_in_since": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_remapped_pgs": 0
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "pgmap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "pgs_by_state": [],
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_pgs": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_pools": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_objects": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "data_bytes": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "bytes_used": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "bytes_avail": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "bytes_total": 0
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "fsmap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "epoch": 1,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "by_rank": [],
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "up:standby": 0
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "mgrmap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "available": false,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "num_standbys": 0,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "modules": [
Oct 01 13:08:22 compute-0 confident_babbage[75618]:             "iostat",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:             "nfs",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:             "restful"
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         ],
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "services": {}
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "servicemap": {
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "epoch": 1,
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:22 compute-0 confident_babbage[75618]:         "services": {}
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     },
Oct 01 13:08:22 compute-0 confident_babbage[75618]:     "progress_events": {}
Oct 01 13:08:22 compute-0 confident_babbage[75618]: }
Oct 01 13:08:22 compute-0 systemd[1]: libpod-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope: Deactivated successfully.
Oct 01 13:08:22 compute-0 podman[75603]: 2025-10-01 13:08:22.374724752 +0000 UTC m=+0.530511524 container died 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b78f061f9bb7ddfe6d9550a52555e6d861ae611584dae5574834cb79851b72-merged.mount: Deactivated successfully.
Oct 01 13:08:22 compute-0 podman[75603]: 2025-10-01 13:08:22.412083606 +0000 UTC m=+0.567870378 container remove 4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd (image=quay.io/ceph/ceph:v18, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:08:22 compute-0 systemd[1]: libpod-conmon-4131b627a350cd14528e07a5aa8e06e1f0d6fd3cadfd6b17262e850ade2f13cd.scope: Deactivated successfully.
Oct 01 13:08:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.puxjpb(active, since 1.05556s)
Oct 01 13:08:22 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:22 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:22 compute-0 ceph-mon[74802]: from='mgr.14102 192.168.122.100:0/746737625' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4246215679' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:22 compute-0 ceph-mon[74802]: mgrmap e3: compute-0.puxjpb(active, since 1.05556s)
Oct 01 13:08:23 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:23 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.puxjpb(active, since 2s)
Oct 01 13:08:24 compute-0 podman[75656]: 2025-10-01 13:08:24.492023429 +0000 UTC m=+0.048542140 container create 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:24 compute-0 systemd[1]: Started libpod-conmon-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope.
Oct 01 13:08:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:24 compute-0 podman[75656]: 2025-10-01 13:08:24.555407001 +0000 UTC m=+0.111925702 container init 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:08:24 compute-0 podman[75656]: 2025-10-01 13:08:24.561112722 +0000 UTC m=+0.117631463 container start 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:24 compute-0 podman[75656]: 2025-10-01 13:08:24.565206561 +0000 UTC m=+0.121725272 container attach 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:08:24 compute-0 podman[75656]: 2025-10-01 13:08:24.475947859 +0000 UTC m=+0.032466590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:24 compute-0 ceph-mon[74802]: mgrmap e4: compute-0.puxjpb(active, since 2s)
Oct 01 13:08:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 01 13:08:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325604898' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]: 
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]: {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "health": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "status": "HEALTH_OK",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "checks": {},
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "mutes": []
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "election_epoch": 5,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "quorum": [
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         0
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     ],
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "quorum_names": [
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "compute-0"
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     ],
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "quorum_age": 24,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "monmap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "epoch": 1,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "min_mon_release_name": "reef",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_mons": 1
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "osdmap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "epoch": 1,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_osds": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_up_osds": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "osd_up_since": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_in_osds": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "osd_in_since": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_remapped_pgs": 0
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "pgmap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "pgs_by_state": [],
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_pgs": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_pools": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_objects": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "data_bytes": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "bytes_used": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "bytes_avail": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "bytes_total": 0
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "fsmap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "epoch": 1,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "by_rank": [],
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "up:standby": 0
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "mgrmap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "available": true,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "num_standbys": 0,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "modules": [
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:             "iostat",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:             "nfs",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:             "restful"
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         ],
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "services": {}
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "servicemap": {
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "epoch": 1,
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "modified": "2025-10-01T13:07:57.318832+0000",
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:         "services": {}
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     },
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]:     "progress_events": {}
Oct 01 13:08:25 compute-0 sleepy_torvalds[75673]: }
Oct 01 13:08:25 compute-0 systemd[1]: libpod-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope: Deactivated successfully.
Oct 01 13:08:25 compute-0 podman[75656]: 2025-10-01 13:08:25.144880223 +0000 UTC m=+0.701398964 container died 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7749d16991bd5d18d68fe4e8b6ba518122152fbf86fdcc6cac12181be749be6-merged.mount: Deactivated successfully.
Oct 01 13:08:25 compute-0 podman[75656]: 2025-10-01 13:08:25.186021089 +0000 UTC m=+0.742539790 container remove 2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede (image=quay.io/ceph/ceph:v18, name=sleepy_torvalds, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:25 compute-0 systemd[1]: libpod-conmon-2c6f9aad0692023906f433938745294e8da117dae6c74933d06787e1266f8ede.scope: Deactivated successfully.
Oct 01 13:08:25 compute-0 podman[75712]: 2025-10-01 13:08:25.252843119 +0000 UTC m=+0.044434370 container create 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:08:25 compute-0 systemd[1]: Started libpod-conmon-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope.
Oct 01 13:08:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:25 compute-0 podman[75712]: 2025-10-01 13:08:25.330316797 +0000 UTC m=+0.121908068 container init 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:25 compute-0 podman[75712]: 2025-10-01 13:08:25.237036587 +0000 UTC m=+0.028627858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:25 compute-0 podman[75712]: 2025-10-01 13:08:25.34019908 +0000 UTC m=+0.131790341 container start 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:08:25 compute-0 podman[75712]: 2025-10-01 13:08:25.34397619 +0000 UTC m=+0.135567471 container attach 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:08:25 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 13:08:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3229926718' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:08:26 compute-0 systemd[1]: libpod-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope: Deactivated successfully.
Oct 01 13:08:26 compute-0 podman[75756]: 2025-10-01 13:08:26.303829259 +0000 UTC m=+0.037123355 container died 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:08:26 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/325604898' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 01 13:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8020259db531343545acd9ff15970b37f7b0df206ccd19f1854348dc2554bf36-merged.mount: Deactivated successfully.
Oct 01 13:08:26 compute-0 podman[75756]: 2025-10-01 13:08:26.866237659 +0000 UTC m=+0.599531765 container remove 5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97 (image=quay.io/ceph/ceph:v18, name=elated_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:26 compute-0 systemd[1]: libpod-conmon-5373d374b0310b6a23ea9c7dd5718b316ca3556d6e2baaefa0e0ce48700bfc97.scope: Deactivated successfully.
Oct 01 13:08:26 compute-0 podman[75772]: 2025-10-01 13:08:26.960905606 +0000 UTC m=+0.062520250 container create 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:08:27 compute-0 systemd[1]: Started libpod-conmon-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope.
Oct 01 13:08:27 compute-0 podman[75772]: 2025-10-01 13:08:26.92490874 +0000 UTC m=+0.026523414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:27 compute-0 podman[75772]: 2025-10-01 13:08:27.070358133 +0000 UTC m=+0.171972797 container init 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:27 compute-0 podman[75772]: 2025-10-01 13:08:27.075848872 +0000 UTC m=+0.177463516 container start 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:27 compute-0 podman[75772]: 2025-10-01 13:08:27.0882198 +0000 UTC m=+0.189834464 container attach 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:27 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3229926718' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:08:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct 01 13:08:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 01 13:08:27 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:28 compute-0 sshd-session[75812]: Received disconnect from 80.253.31.232 port 47248:11: Bye Bye [preauth]
Oct 01 13:08:28 compute-0 sshd-session[75812]: Disconnected from authenticating user root 80.253.31.232 port 47248 [preauth]
Oct 01 13:08:28 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct 01 13:08:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 01 13:08:28 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.puxjpb(active, since 6s)
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  1: '-n'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  2: 'mgr.compute-0.puxjpb'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  3: '-f'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  4: '--setuser'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  5: 'ceph'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  6: '--setgroup'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  7: 'ceph'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  8: '--default-log-to-file=false'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  9: '--default-log-to-journald=true'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr respawn  exe_path /proc/self/exe
Oct 01 13:08:28 compute-0 systemd[1]: libpod-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope: Deactivated successfully.
Oct 01 13:08:28 compute-0 podman[75816]: 2025-10-01 13:08:28.510260143 +0000 UTC m=+0.026077075 container died 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:08:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: ignoring --setuser ceph since I am not root
Oct 01 13:08:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: ignoring --setgroup ceph since I am not root
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: pidfile_write: ignore empty --pid-file
Oct 01 13:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d10f4d31bec9b76fed882ceffa18a5b4bd2a167d0961e0733a0bcd7cbf8787-merged.mount: Deactivated successfully.
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'alerts'
Oct 01 13:08:28 compute-0 podman[75816]: 2025-10-01 13:08:28.684708717 +0000 UTC m=+0.200525559 container remove 8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064 (image=quay.io/ceph/ceph:v18, name=trusting_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:08:28 compute-0 systemd[1]: libpod-conmon-8a76b247cefabfd49724fdd6ce24ee8c2e8b44e1db9e93492ba9ad9980cec064.scope: Deactivated successfully.
Oct 01 13:08:28 compute-0 podman[75855]: 2025-10-01 13:08:28.772042414 +0000 UTC m=+0.056049938 container create fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:08:28 compute-0 systemd[1]: Started libpod-conmon-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope.
Oct 01 13:08:28 compute-0 podman[75855]: 2025-10-01 13:08:28.737189218 +0000 UTC m=+0.021196762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:28 compute-0 podman[75855]: 2025-10-01 13:08:28.870595888 +0000 UTC m=+0.154603442 container init fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:28 compute-0 podman[75855]: 2025-10-01 13:08:28.877700727 +0000 UTC m=+0.161708291 container start fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:28 compute-0 podman[75855]: 2025-10-01 13:08:28.900119262 +0000 UTC m=+0.184126796 container attach fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:08:28 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'balancer'
Oct 01 13:08:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:28.962+0000 7f14179b4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:08:29 compute-0 ceph-mgr[75103]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:08:29 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'cephadm'
Oct 01 13:08:29 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:29.217+0000 7f14179b4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:08:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 01 13:08:29 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169176868' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]: {
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]:     "epoch": 5,
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]:     "available": true,
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]:     "active_name": "compute-0.puxjpb",
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]:     "num_standby": 0
Oct 01 13:08:29 compute-0 ecstatic_gould[75872]: }
Oct 01 13:08:29 compute-0 systemd[1]: libpod-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope: Deactivated successfully.
Oct 01 13:08:29 compute-0 podman[75855]: 2025-10-01 13:08:29.460792857 +0000 UTC m=+0.744800391 container died fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:29 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2735046685' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct 01 13:08:29 compute-0 ceph-mon[74802]: mgrmap e5: compute-0.puxjpb(active, since 6s)
Oct 01 13:08:29 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4169176868' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 13:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a5c5139002c06e035b4a5ab61a62dad4ca242da96b44b6bec98de1fe2266ea8-merged.mount: Deactivated successfully.
Oct 01 13:08:29 compute-0 podman[75855]: 2025-10-01 13:08:29.595865319 +0000 UTC m=+0.879872843 container remove fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37 (image=quay.io/ceph/ceph:v18, name=ecstatic_gould, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:08:29 compute-0 systemd[1]: libpod-conmon-fc7022937851d63d3c5d091f3b747943b898aafbe45b7385915a22267cc97e37.scope: Deactivated successfully.
Oct 01 13:08:29 compute-0 podman[75911]: 2025-10-01 13:08:29.735337243 +0000 UTC m=+0.110793148 container create 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:29 compute-0 podman[75911]: 2025-10-01 13:08:29.659408212 +0000 UTC m=+0.034864217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:29 compute-0 systemd[1]: Started libpod-conmon-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope.
Oct 01 13:08:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:29 compute-0 podman[75911]: 2025-10-01 13:08:29.835615863 +0000 UTC m=+0.211071798 container init 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:08:29 compute-0 podman[75911]: 2025-10-01 13:08:29.842947801 +0000 UTC m=+0.218403716 container start 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:29 compute-0 podman[75911]: 2025-10-01 13:08:29.897395019 +0000 UTC m=+0.272850954 container attach 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:08:31 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'crash'
Oct 01 13:08:31 compute-0 ceph-mgr[75103]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:08:31 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'dashboard'
Oct 01 13:08:31 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:31.423+0000 7f14179b4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:08:32 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'devicehealth'
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'diskprediction_local'
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.199+0000 7f14179b4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]:   from numpy import show_config as show_numpy_config
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.709+0000 7f14179b4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'influx'
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 13:08:33 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'insights'
Oct 01 13:08:33 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:33.960+0000 7f14179b4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 01 13:08:34 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'iostat'
Oct 01 13:08:34 compute-0 ceph-mgr[75103]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 13:08:34 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'k8sevents'
Oct 01 13:08:34 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:34.425+0000 7f14179b4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 01 13:08:36 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'localpool'
Oct 01 13:08:36 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'mds_autoscaler'
Oct 01 13:08:37 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'mirroring'
Oct 01 13:08:37 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'nfs'
Oct 01 13:08:37 compute-0 ceph-mgr[75103]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 13:08:37 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'orchestrator'
Oct 01 13:08:37 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:37.905+0000 7f14179b4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 01 13:08:38 compute-0 ceph-mgr[75103]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:38 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'osd_perf_query'
Oct 01 13:08:38 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:38.554+0000 7f14179b4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:38 compute-0 ceph-mgr[75103]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 13:08:38 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:38.824+0000 7f14179b4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 01 13:08:38 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'osd_support'
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'pg_autoscaler'
Oct 01 13:08:39 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.044+0000 7f14179b4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 01 13:08:39 compute-0 sshd-session[75962]: Received disconnect from 200.7.101.139 port 50460:11: Bye Bye [preauth]
Oct 01 13:08:39 compute-0 sshd-session[75962]: Disconnected from authenticating user root 200.7.101.139 port 50460 [preauth]
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'progress'
Oct 01 13:08:39 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.294+0000 7f14179b4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 13:08:39 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'prometheus'
Oct 01 13:08:39 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:39.520+0000 7f14179b4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 01 13:08:40 compute-0 ceph-mgr[75103]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 13:08:40 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rbd_support'
Oct 01 13:08:40 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:40.507+0000 7f14179b4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 01 13:08:40 compute-0 ceph-mgr[75103]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 13:08:40 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:40.799+0000 7f14179b4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 01 13:08:40 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'restful'
Oct 01 13:08:41 compute-0 sshd-session[75964]: Invalid user ubuntu from 156.236.31.46 port 43656
Oct 01 13:08:41 compute-0 sshd-session[75964]: Received disconnect from 156.236.31.46 port 43656:11: Bye Bye [preauth]
Oct 01 13:08:41 compute-0 sshd-session[75964]: Disconnected from invalid user ubuntu 156.236.31.46 port 43656 [preauth]
Oct 01 13:08:41 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rgw'
Oct 01 13:08:42 compute-0 ceph-mgr[75103]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 13:08:42 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:42.213+0000 7f14179b4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 01 13:08:42 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'rook'
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'selftest'
Oct 01 13:08:44 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.223+0000 7f14179b4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'snap_schedule'
Oct 01 13:08:44 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.461+0000 7f14179b4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'stats'
Oct 01 13:08:44 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:44.720+0000 7f14179b4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 01 13:08:44 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'status'
Oct 01 13:08:45 compute-0 ceph-mgr[75103]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 13:08:45 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'telegraf'
Oct 01 13:08:45 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:45.218+0000 7f14179b4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 01 13:08:45 compute-0 ceph-mgr[75103]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 13:08:45 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'telemetry'
Oct 01 13:08:45 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:45.453+0000 7f14179b4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 01 13:08:46 compute-0 ceph-mgr[75103]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 13:08:46 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'test_orchestrator'
Oct 01 13:08:46 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:46.082+0000 7f14179b4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 01 13:08:46 compute-0 ceph-mgr[75103]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:46 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'volumes'
Oct 01 13:08:46 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:46.747+0000 7f14179b4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr[py] Loading python module 'zabbix'
Oct 01 13:08:47 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:47.448+0000 7f14179b4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 13:08:47 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T13:08:47.683+0000 7f14179b4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Active manager daemon compute-0.puxjpb restarted
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.puxjpb
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: ms_deliver_dispatch: unhandled message 0x557d512dd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr handle_mgr_map Activating!
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr handle_mgr_map I am now activating
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.puxjpb(active, starting, since 0.0140545s)
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e1 all = 1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: balancer
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Starting
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Manager daemon compute-0.puxjpb is now available
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:08:47
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: Active manager daemon compute-0.puxjpb restarted
Oct 01 13:08:47 compute-0 ceph-mon[74802]: Activating manager daemon compute-0.puxjpb
Oct 01 13:08:47 compute-0 ceph-mon[74802]: osdmap e2: 0 total, 0 up, 0 in
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mgrmap e6: compute-0.puxjpb(active, starting, since 0.0140545s)
Oct 01 13:08:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.puxjpb", "id": "compute-0.puxjpb"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mon[74802]: Manager daemon compute-0.puxjpb is now available
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: cephadm
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: crash
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: devicehealth
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: iostat
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: nfs
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: orchestrator
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Starting
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: pg_autoscaler
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: progress
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [progress INFO root] Loading...
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [progress INFO root] No stored events to load
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [progress INFO root] Loaded [] historic events
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [progress INFO root] Loaded OSDMap, ready.
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] recovery thread starting
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] starting setup
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: rbd_support
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: restful
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [restful INFO root] server_addr: :: server_port: 8003
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [restful WARNING root] server not running: no certificate configured
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: status
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: telemetry
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] PerfHandler: starting
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TaskHandler: starting
Oct 01 13:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"} v 0) v1
Oct 01 13:08:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] setup complete
Oct 01 13:08:47 compute-0 ceph-mgr[75103]: mgr load Constructed class from module: volumes
Oct 01 13:08:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct 01 13:08:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct 01 13:08:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 01 13:08:48 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.puxjpb(active, since 1.01964s)
Oct 01 13:08:48 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 01 13:08:48 compute-0 magical_bouman[75927]: {
Oct 01 13:08:48 compute-0 magical_bouman[75927]:     "mgrmap_epoch": 7,
Oct 01 13:08:48 compute-0 magical_bouman[75927]:     "initialized": true
Oct 01 13:08:48 compute-0 magical_bouman[75927]: }
Oct 01 13:08:48 compute-0 systemd[1]: libpod-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope: Deactivated successfully.
Oct 01 13:08:48 compute-0 podman[75911]: 2025-10-01 13:08:48.725980615 +0000 UTC m=+19.101436520 container died 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8062cd5766aab92ae220aba4338148b2917823d7d6efcc07e4ee8161ceb205-merged.mount: Deactivated successfully.
Oct 01 13:08:48 compute-0 ceph-mon[74802]: Found migration_current of "None". Setting to last migration.
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/mirror_snapshot_schedule"}]: dispatch
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.puxjpb/trash_purge_schedule"}]: dispatch
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:48 compute-0 ceph-mon[74802]: mgrmap e7: compute-0.puxjpb(active, since 1.01964s)
Oct 01 13:08:48 compute-0 podman[75911]: 2025-10-01 13:08:48.781227306 +0000 UTC m=+19.156683211 container remove 6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f (image=quay.io/ceph/ceph:v18, name=magical_bouman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:48 compute-0 systemd[1]: libpod-conmon-6f2c020f2ef37f9ffdf89881292e09e4dab2f09745d9e62bd154c66f1c759d1f.scope: Deactivated successfully.
Oct 01 13:08:48 compute-0 podman[76090]: 2025-10-01 13:08:48.851503572 +0000 UTC m=+0.047976126 container create 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:08:48 compute-0 systemd[1]: Started libpod-conmon-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope.
Oct 01 13:08:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:48 compute-0 podman[76090]: 2025-10-01 13:08:48.828584896 +0000 UTC m=+0.025057470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:48 compute-0 podman[76090]: 2025-10-01 13:08:48.935591417 +0000 UTC m=+0.132063991 container init 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:08:48 compute-0 podman[76090]: 2025-10-01 13:08:48.942149283 +0000 UTC m=+0.138621837 container start 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:48 compute-0 podman[76090]: 2025-10-01 13:08:48.94530685 +0000 UTC m=+0.141779424 container attach 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct 01 13:08:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:08:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:49 compute-0 systemd[1]: libpod-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope: Deactivated successfully.
Oct 01 13:08:49 compute-0 podman[76090]: 2025-10-01 13:08:49.465296566 +0000 UTC m=+0.661769130 container died 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-849d4908f2a099e71e6f87d0ba58708acb63e6a006cc52e7f2422d0a3501e8a6-merged.mount: Deactivated successfully.
Oct 01 13:08:49 compute-0 podman[76090]: 2025-10-01 13:08:49.504237469 +0000 UTC m=+0.700710023 container remove 1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003 (image=quay.io/ceph/ceph:v18, name=dreamy_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:08:49 compute-0 systemd[1]: libpod-conmon-1202f19991b5b6ee1c3797a896db52eaa4e0901e7a7b7e50fd10e2608ee7c003.scope: Deactivated successfully.
Oct 01 13:08:49 compute-0 podman[76147]: 2025-10-01 13:08:49.556100254 +0000 UTC m=+0.035757636 container create f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:08:49 compute-0 systemd[1]: Started libpod-conmon-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope.
Oct 01 13:08:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:49 compute-0 podman[76147]: 2025-10-01 13:08:49.539480352 +0000 UTC m=+0.019137764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:49 compute-0 podman[76147]: 2025-10-01 13:08:49.638212864 +0000 UTC m=+0.117870296 container init f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:08:49 compute-0 podman[76147]: 2025-10-01 13:08:49.643754925 +0000 UTC m=+0.123412317 container start f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:08:49 compute-0 podman[76147]: 2025-10-01 13:08:49.646611039 +0000 UTC m=+0.126268461 container attach f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 13:08:49 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO cherrypy.error] [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct 01 13:08:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:08:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct 01 13:08:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_user
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct 01 13:08:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct 01 13:08:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_config
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct 01 13:08:50 compute-0 amazing_payne[76164]: ssh user set to ceph-admin. sudo will be used
Oct 01 13:08:50 compute-0 systemd[1]: libpod-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope: Deactivated successfully.
Oct 01 13:08:50 compute-0 podman[76213]: 2025-10-01 13:08:50.20466048 +0000 UTC m=+0.021331358 container died f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c281d33a8538a453d752ce44d6f13a766e1f50449b01b00840c05151ad95158-merged.mount: Deactivated successfully.
Oct 01 13:08:50 compute-0 podman[76213]: 2025-10-01 13:08:50.239710115 +0000 UTC m=+0.056380983 container remove f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98 (image=quay.io/ceph/ceph:v18, name=amazing_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:08:50 compute-0 systemd[1]: libpod-conmon-f4d332e14b78db858a69ac533541dbbcbaecd036dd878e1e715ed36dc2018b98.scope: Deactivated successfully.
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.313354495 +0000 UTC m=+0.046790214 container create dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019922317 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:08:50 compute-0 systemd[1]: Started libpod-conmon-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope.
Oct 01 13:08:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.388120197 +0000 UTC m=+0.121555926 container init dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.295676268 +0000 UTC m=+0.029111987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.397855109 +0000 UTC m=+0.131290818 container start dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.401468617 +0000 UTC m=+0.134904336 container attach dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:50 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.puxjpb(active, since 2s)
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct 01 13:08:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_identity_key
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: [cephadm INFO root] Set ssh private key
Oct 01 13:08:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh private key
Oct 01 13:08:50 compute-0 systemd[1]: libpod-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope: Deactivated successfully.
Oct 01 13:08:50 compute-0 conmon[76245]: conmon dffdb05c80d19a001d13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope/container/memory.events
Oct 01 13:08:50 compute-0 podman[76228]: 2025-10-01 13:08:50.964051345 +0000 UTC m=+0.697487054 container died dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2369194926e74c3475595bc0298d2804a911040697237d86ebb90ccc271376d-merged.mount: Deactivated successfully.
Oct 01 13:08:51 compute-0 podman[76228]: 2025-10-01 13:08:51.015662468 +0000 UTC m=+0.749098197 container remove dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80 (image=quay.io/ceph/ceph:v18, name=focused_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:51 compute-0 systemd[1]: libpod-conmon-dffdb05c80d19a001d13d301afa7459f0b0908ba13fd5cea88cd5d79cc22fd80.scope: Deactivated successfully.
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.085055316 +0000 UTC m=+0.041899583 container create ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:08:51 compute-0 systemd[1]: Started libpod-conmon-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope.
Oct 01 13:08:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.161172344 +0000 UTC m=+0.118016691 container init ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.068538277 +0000 UTC m=+0.025382564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.170620935 +0000 UTC m=+0.127465202 container start ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.174607309 +0000 UTC m=+0.131451716 container attach ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:08:51 compute-0 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Bus STARTING
Oct 01 13:08:51 compute-0 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Serving on https://192.168.122.100:7150
Oct 01 13:08:51 compute-0 ceph-mon[74802]: [01/Oct/2025:13:08:49] ENGINE Client ('192.168.122.100', 38674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 01 13:08:51 compute-0 ceph-mon[74802]: [01/Oct/2025:13:08:50] ENGINE Serving on http://192.168.122.100:8765
Oct 01 13:08:51 compute-0 ceph-mon[74802]: [01/Oct/2025:13:08:50] ENGINE Bus STARTED
Oct 01 13:08:51 compute-0 ceph-mon[74802]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:51 compute-0 ceph-mon[74802]: Set ssh ssh_user
Oct 01 13:08:51 compute-0 ceph-mon[74802]: Set ssh ssh_config
Oct 01 13:08:51 compute-0 ceph-mon[74802]: ssh user set to ceph-admin. sudo will be used
Oct 01 13:08:51 compute-0 ceph-mon[74802]: mgrmap e8: compute-0.puxjpb(active, since 2s)
Oct 01 13:08:51 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:51 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct 01 13:08:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:51 compute-0 ceph-mgr[75103]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct 01 13:08:51 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct 01 13:08:51 compute-0 systemd[1]: libpod-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope: Deactivated successfully.
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.704319357 +0000 UTC m=+0.661163654 container died ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:08:51 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6028ad1a73da64ca5ee202e300398e08d81a84871c332925d3e27307f5e7cf7-merged.mount: Deactivated successfully.
Oct 01 13:08:51 compute-0 podman[76284]: 2025-10-01 13:08:51.753568819 +0000 UTC m=+0.710413116 container remove ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4 (image=quay.io/ceph/ceph:v18, name=amazing_antonelli, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:51 compute-0 systemd[1]: libpod-conmon-ea62c362130f3932e6d6a0e77fa242e1b08bb6fdd28523bc6037e314bcba0ef4.scope: Deactivated successfully.
Oct 01 13:08:51 compute-0 podman[76336]: 2025-10-01 13:08:51.832845235 +0000 UTC m=+0.050396062 container create 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:08:51 compute-0 systemd[1]: Started libpod-conmon-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope.
Oct 01 13:08:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:51 compute-0 podman[76336]: 2025-10-01 13:08:51.817611313 +0000 UTC m=+0.035162160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:51 compute-0 podman[76336]: 2025-10-01 13:08:51.919927471 +0000 UTC m=+0.137478378 container init 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:08:51 compute-0 podman[76336]: 2025-10-01 13:08:51.928991025 +0000 UTC m=+0.146541892 container start 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:08:51 compute-0 podman[76336]: 2025-10-01 13:08:51.932852473 +0000 UTC m=+0.150403400 container attach 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:08:52 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:52 compute-0 festive_antonelli[76352]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI5xAJDkPgCIf6A0Wug1Am7fHXcOL9nUBYSVUBsn0QymGjzCb9x6M/orVCsS+sJX+rxY/wCTMF1ePsKtpvq56LE06MolWp3oieKJ9YLlvpa8DalQkzqEz7+O2HVSYRxm+qX0UaZ5TjLo3ShwHMVsALpy+Mp5QPCNCdXek22hRRix4tQQ1bSRzcONPNWVkm7cok4Oxkwg6QcPdQjwKPN0VDZn0gZb8OUjQNVaZJSIfmh3K7cGcOro6TCObnWcWwkiCs4TWUIHxB4vBHvFwRxUcV7QvAuyY52/T2cmx5XIU8RLi7enL7ADTB7WShmeglRBntpw1QYZZ6ZN/i62wO1ElM9WKUCiGJ5BMIkcJm/w/ufqyEyAPjPROX84iUoWmtYw+c6gIdg5YuRFFxBpRlOEXcC3DbSWZpQ07adU2f2HZ8jjVgSfSEe2aVAceeIsuPJFNOrFr/20LhvHNk226Ji3eM+zIGl/3mSGO7qXzNkmT1EK7NJSQTnqub98vp/1BVQI8= zuul@controller
Oct 01 13:08:52 compute-0 systemd[1]: libpod-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope: Deactivated successfully.
Oct 01 13:08:52 compute-0 podman[76336]: 2025-10-01 13:08:52.453163583 +0000 UTC m=+0.670714400 container died 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:08:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b17b54bdb30e9978e4ba372aceac8246e3e7beb5ad39cacda6d339994eaf2e1f-merged.mount: Deactivated successfully.
Oct 01 13:08:52 compute-0 ceph-mon[74802]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:52 compute-0 ceph-mon[74802]: Set ssh ssh_identity_key
Oct 01 13:08:52 compute-0 ceph-mon[74802]: Set ssh private key
Oct 01 13:08:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:52 compute-0 podman[76336]: 2025-10-01 13:08:52.613341827 +0000 UTC m=+0.830892654 container remove 4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219 (image=quay.io/ceph/ceph:v18, name=festive_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:08:52 compute-0 systemd[1]: libpod-conmon-4f66ac01b926a35dc8cc1982cc47c7017437e187e04bf3012eba66b1c1c66219.scope: Deactivated successfully.
Oct 01 13:08:52 compute-0 podman[76390]: 2025-10-01 13:08:52.722330186 +0000 UTC m=+0.082146913 container create f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:08:52 compute-0 podman[76390]: 2025-10-01 13:08:52.667449129 +0000 UTC m=+0.027265876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:08:52 compute-0 systemd[1]: Started libpod-conmon-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope.
Oct 01 13:08:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:08:52 compute-0 podman[76390]: 2025-10-01 13:08:52.837649389 +0000 UTC m=+0.197466126 container init f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:08:52 compute-0 podman[76390]: 2025-10-01 13:08:52.848542503 +0000 UTC m=+0.208359220 container start f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:08:52 compute-0 podman[76390]: 2025-10-01 13:08:52.864613811 +0000 UTC m=+0.224430548 container attach f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:08:53 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:53 compute-0 ceph-mon[74802]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:53 compute-0 ceph-mon[74802]: Set ssh ssh_identity_pub
Oct 01 13:08:53 compute-0 ceph-mon[74802]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:53 compute-0 sshd-session[76432]: Accepted publickey for ceph-admin from 192.168.122.100 port 35086 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:53 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct 01 13:08:53 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 01 13:08:53 compute-0 systemd-logind[818]: New session 21 of user ceph-admin.
Oct 01 13:08:53 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 01 13:08:53 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct 01 13:08:53 compute-0 systemd[76436]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:53 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:53 compute-0 systemd[76436]: Queued start job for default target Main User Target.
Oct 01 13:08:53 compute-0 sshd-session[76450]: Accepted publickey for ceph-admin from 192.168.122.100 port 35098 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:53 compute-0 systemd[76436]: Created slice User Application Slice.
Oct 01 13:08:53 compute-0 systemd[76436]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 01 13:08:53 compute-0 systemd[76436]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 13:08:53 compute-0 systemd[76436]: Reached target Paths.
Oct 01 13:08:53 compute-0 systemd[76436]: Reached target Timers.
Oct 01 13:08:53 compute-0 systemd[76436]: Starting D-Bus User Message Bus Socket...
Oct 01 13:08:53 compute-0 systemd[76436]: Starting Create User's Volatile Files and Directories...
Oct 01 13:08:53 compute-0 systemd-logind[818]: New session 23 of user ceph-admin.
Oct 01 13:08:53 compute-0 systemd[76436]: Listening on D-Bus User Message Bus Socket.
Oct 01 13:08:53 compute-0 systemd[76436]: Reached target Sockets.
Oct 01 13:08:53 compute-0 systemd[76436]: Finished Create User's Volatile Files and Directories.
Oct 01 13:08:53 compute-0 systemd[76436]: Reached target Basic System.
Oct 01 13:08:53 compute-0 systemd[76436]: Reached target Main User Target.
Oct 01 13:08:53 compute-0 systemd[76436]: Startup finished in 125ms.
Oct 01 13:08:53 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct 01 13:08:53 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Oct 01 13:08:53 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Oct 01 13:08:53 compute-0 sshd-session[76432]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:53 compute-0 sshd-session[76450]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:53 compute-0 sudo[76457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:53 compute-0 sudo[76457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:53 compute-0 sudo[76457]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:53 compute-0 sudo[76482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:08:54 compute-0 sudo[76482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:54 compute-0 sudo[76482]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:54 compute-0 sshd-session[76507]: Accepted publickey for ceph-admin from 192.168.122.100 port 35102 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:54 compute-0 systemd-logind[818]: New session 24 of user ceph-admin.
Oct 01 13:08:54 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Oct 01 13:08:54 compute-0 sshd-session[76507]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:54 compute-0 sudo[76511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:54 compute-0 sudo[76511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:54 compute-0 sudo[76511]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:54 compute-0 sudo[76536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 13:08:54 compute-0 sudo[76536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:54 compute-0 sudo[76536]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:54 compute-0 sshd-session[76561]: Accepted publickey for ceph-admin from 192.168.122.100 port 35114 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:54 compute-0 systemd-logind[818]: New session 25 of user ceph-admin.
Oct 01 13:08:54 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Oct 01 13:08:54 compute-0 sshd-session[76561]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:54 compute-0 sudo[76565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:54 compute-0 sudo[76565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:54 compute-0 sudo[76565]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:54 compute-0 ceph-mon[74802]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:08:54 compute-0 sudo[76590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 01 13:08:54 compute-0 sudo[76590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:54 compute-0 sudo[76590]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:54 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct 01 13:08:54 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct 01 13:08:55 compute-0 sshd-session[76615]: Accepted publickey for ceph-admin from 192.168.122.100 port 35126 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:55 compute-0 systemd-logind[818]: New session 26 of user ceph-admin.
Oct 01 13:08:55 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Oct 01 13:08:55 compute-0 sshd-session[76615]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:55 compute-0 sudo[76619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:55 compute-0 sudo[76619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:55 compute-0 sudo[76619]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:55 compute-0 sudo[76644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:08:55 compute-0 sudo[76644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:55 compute-0 sudo[76644]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053030 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:08:55 compute-0 sshd-session[76669]: Accepted publickey for ceph-admin from 192.168.122.100 port 35138 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:55 compute-0 systemd-logind[818]: New session 27 of user ceph-admin.
Oct 01 13:08:55 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct 01 13:08:55 compute-0 sshd-session[76669]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:55 compute-0 sudo[76673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:55 compute-0 sudo[76673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:55 compute-0 sudo[76673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:55 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:55 compute-0 sudo[76698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:08:55 compute-0 sudo[76698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:55 compute-0 sudo[76698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:55 compute-0 ceph-mon[74802]: Deploying cephadm binary to compute-0
Oct 01 13:08:55 compute-0 sshd-session[76723]: Accepted publickey for ceph-admin from 192.168.122.100 port 35152 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:55 compute-0 systemd-logind[818]: New session 28 of user ceph-admin.
Oct 01 13:08:55 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Oct 01 13:08:55 compute-0 sshd-session[76723]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:56 compute-0 sudo[76727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:56 compute-0 sudo[76727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76727]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:56 compute-0 sudo[76752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 01 13:08:56 compute-0 sudo[76752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76752]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:56 compute-0 sshd-session[76777]: Accepted publickey for ceph-admin from 192.168.122.100 port 35160 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:56 compute-0 systemd-logind[818]: New session 29 of user ceph-admin.
Oct 01 13:08:56 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct 01 13:08:56 compute-0 sshd-session[76777]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:56 compute-0 sudo[76781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:56 compute-0 sudo[76781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76781]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:56 compute-0 sudo[76806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:08:56 compute-0 sudo[76806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76806]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:56 compute-0 sshd-session[76831]: Accepted publickey for ceph-admin from 192.168.122.100 port 35168 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:56 compute-0 systemd-logind[818]: New session 30 of user ceph-admin.
Oct 01 13:08:56 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Oct 01 13:08:56 compute-0 sshd-session[76831]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:56 compute-0 sudo[76835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:56 compute-0 sudo[76835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76835]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:56 compute-0 sudo[76860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 01 13:08:56 compute-0 sudo[76860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:56 compute-0 sudo[76860]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:57 compute-0 sshd-session[76885]: Accepted publickey for ceph-admin from 192.168.122.100 port 35176 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:57 compute-0 systemd-logind[818]: New session 31 of user ceph-admin.
Oct 01 13:08:57 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct 01 13:08:57 compute-0 sshd-session[76885]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:57 compute-0 sshd-session[76914]: Accepted publickey for ceph-admin from 192.168.122.100 port 35192 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:57 compute-0 systemd-logind[818]: New session 32 of user ceph-admin.
Oct 01 13:08:57 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct 01 13:08:57 compute-0 sshd-session[76914]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:57 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:08:57 compute-0 sudo[76918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:57 compute-0 sudo[76918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:57 compute-0 sudo[76918]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:57 compute-0 sudo[76943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 01 13:08:57 compute-0 sudo[76943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:57 compute-0 sudo[76943]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:58 compute-0 sshd-session[76968]: Accepted publickey for ceph-admin from 192.168.122.100 port 35198 ssh2: RSA SHA256:5ag89VD8sAqbJMbUKp6zfsUbuYfbU2aos8yozxXcakM
Oct 01 13:08:58 compute-0 systemd-logind[818]: New session 33 of user ceph-admin.
Oct 01 13:08:58 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct 01 13:08:58 compute-0 sshd-session[76968]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 01 13:08:58 compute-0 sudo[76972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:58 compute-0 sudo[76972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:58 compute-0 sudo[76972]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:58 compute-0 sudo[76997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 13:08:58 compute-0 sudo[76997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:58 compute-0 sudo[76997]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:08:59 compute-0 ceph-mgr[75103]: [cephadm INFO root] Added host compute-0
Oct 01 13:08:59 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 01 13:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:08:59 compute-0 optimistic_merkle[76406]: Added host 'compute-0' with addr '192.168.122.100'
Oct 01 13:08:59 compute-0 systemd[1]: libpod-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope: Deactivated successfully.
Oct 01 13:08:59 compute-0 sudo[77044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:59 compute-0 sudo[77044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:59 compute-0 sudo[77044]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:59 compute-0 podman[77056]: 2025-10-01 13:08:59.32192559 +0000 UTC m=+0.049473171 container died f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:08:59 compute-0 sudo[77082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:08:59 compute-0 sudo[77082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:59 compute-0 sudo[77082]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:59 compute-0 sudo[77107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:08:59 compute-0 sudo[77107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:59 compute-0 sudo[77107]: pam_unix(sudo:session): session closed for user root
Oct 01 13:08:59 compute-0 sudo[77132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Oct 01 13:08:59 compute-0 sudo[77132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:08:59 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b384bd758db9dc9b8f3819e8e483aca11c0bbd5e320025ddcaabf03ea029827-merged.mount: Deactivated successfully.
Oct 01 13:09:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:00 compute-0 ceph-mon[74802]: Added host compute-0
Oct 01 13:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:09:00 compute-0 sshd-session[76912]: Received disconnect from 14.103.127.7 port 35836:11: Bye Bye [preauth]
Oct 01 13:09:00 compute-0 sshd-session[76912]: Disconnected from authenticating user root 14.103.127.7 port 35836 [preauth]
Oct 01 13:09:01 compute-0 podman[77056]: 2025-10-01 13:09:01.141092109 +0000 UTC m=+1.868639650 container remove f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:01 compute-0 systemd[1]: libpod-conmon-f686caeac5c258f06e21b0d92f8af04be1ef89fbec930d436fd1a92d70e20882.scope: Deactivated successfully.
Oct 01 13:09:01 compute-0 podman[77189]: 2025-10-01 13:09:01.218152739 +0000 UTC m=+0.026731733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:01 compute-0 podman[77186]: 2025-10-01 13:09:01.224449233 +0000 UTC m=+0.037913730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:01 compute-0 podman[77189]: 2025-10-01 13:09:01.567214463 +0000 UTC m=+0.375793387 container create 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:09:01 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:09:01 compute-0 systemd[1]: Started libpod-conmon-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope.
Oct 01 13:09:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:02 compute-0 podman[77186]: 2025-10-01 13:09:02.111258575 +0000 UTC m=+0.924722972 container create 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:02 compute-0 systemd[1]: Started libpod-conmon-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope.
Oct 01 13:09:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:02 compute-0 podman[77189]: 2025-10-01 13:09:02.803270851 +0000 UTC m=+1.611849825 container init 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:02 compute-0 podman[77189]: 2025-10-01 13:09:02.815346097 +0000 UTC m=+1.623925001 container start 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:03 compute-0 podman[77189]: 2025-10-01 13:09:03.061368952 +0000 UTC m=+1.869947866 container attach 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:09:03 compute-0 sharp_chaplygin[77219]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct 01 13:09:03 compute-0 systemd[1]: libpod-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope: Deactivated successfully.
Oct 01 13:09:03 compute-0 podman[77189]: 2025-10-01 13:09:03.12090448 +0000 UTC m=+1.929483374 container died 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:09:03 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-459ffc7c18b85ba1c5c32fe0bc9ec1899c78781e2aeaafad1d495b45c7d15330-merged.mount: Deactivated successfully.
Oct 01 13:09:04 compute-0 podman[77186]: 2025-10-01 13:09:04.130648689 +0000 UTC m=+2.944113126 container init 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:04 compute-0 podman[77186]: 2025-10-01 13:09:04.135697499 +0000 UTC m=+2.949161916 container start 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:09:04 compute-0 podman[77186]: 2025-10-01 13:09:04.613640357 +0000 UTC m=+3.427104844 container attach 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:04 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:04 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct 01 13:09:04 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct 01 13:09:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 13:09:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:05 compute-0 blissful_bhaskara[77224]: Scheduled mon update...
Oct 01 13:09:05 compute-0 systemd[1]: libpod-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope: Deactivated successfully.
Oct 01 13:09:05 compute-0 ceph-mgr[75103]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct 01 13:09:06 compute-0 podman[77189]: 2025-10-01 13:09:06.113086435 +0000 UTC m=+4.921665349 container remove 3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d (image=quay.io/ceph/ceph:v18, name=sharp_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:06 compute-0 podman[77186]: 2025-10-01 13:09:06.144707789 +0000 UTC m=+4.958172186 container died 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:06 compute-0 sudo[77132]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct 01 13:09:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:06 compute-0 sudo[77276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:06 compute-0 sudo[77276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:06 compute-0 sudo[77276]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:06 compute-0 sudo[77302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:06 compute-0 sudo[77302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:06 compute-0 sudo[77302]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:06 compute-0 ceph-mon[74802]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:06 compute-0 ceph-mon[74802]: Saving service mon spec with placement count:5
Oct 01 13:09:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:06 compute-0 sudo[77327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:06 compute-0 sudo[77327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:06 compute-0 sudo[77327]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:06 compute-0 sudo[77352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 13:09:06 compute-0 sudo[77352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-748dda00494bd032039e6d1250e7f29fe7836888a270c97529edfece02e8a35d-merged.mount: Deactivated successfully.
Oct 01 13:09:07 compute-0 podman[77186]: 2025-10-01 13:09:07.429537767 +0000 UTC m=+6.243002164 container remove 6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74 (image=quay.io/ceph/ceph:v18, name=blissful_bhaskara, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:07 compute-0 podman[77389]: 2025-10-01 13:09:07.477828907 +0000 UTC m=+0.026593957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:07 compute-0 podman[77389]: 2025-10-01 13:09:07.641699092 +0000 UTC m=+0.190464102 container create 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:07 compute-0 ceph-mgr[75103]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct 01 13:09:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 01 13:09:07 compute-0 systemd[1]: libpod-conmon-6f188b5a4623131aae3b0c2a3d6241b17dacfb64830a28d055f9c5d5a86eac74.scope: Deactivated successfully.
Oct 01 13:09:07 compute-0 systemd[1]: Started libpod-conmon-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope.
Oct 01 13:09:07 compute-0 systemd[1]: libpod-conmon-3c67069d5d20e24885d0498a4800fb2693407eb6b11f3f1f33f6181f684e430d.scope: Deactivated successfully.
Oct 01 13:09:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:07 compute-0 sudo[77352]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:08 compute-0 podman[77389]: 2025-10-01 13:09:08.042237676 +0000 UTC m=+0.591002686 container init 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:09:08 compute-0 podman[77389]: 2025-10-01 13:09:08.052787594 +0000 UTC m=+0.601552604 container start 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:08 compute-0 ceph-mon[74802]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct 01 13:09:08 compute-0 podman[77389]: 2025-10-01 13:09:08.25075715 +0000 UTC m=+0.799522170 container attach 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:09:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:08 compute-0 sudo[77437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:08 compute-0 sudo[77437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:08 compute-0 sudo[77437]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:08 compute-0 sudo[77462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:08 compute-0 sudo[77462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:08 compute-0 sudo[77462]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:08 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:08 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct 01 13:09:08 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct 01 13:09:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:08 compute-0 sudo[77487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:08 compute-0 sudo[77487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:08 compute-0 sudo[77487]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:08 compute-0 sudo[77513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:09:08 compute-0 sudo[77513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:08 compute-0 hardcore_varahamihira[77407]: Scheduled mgr update...
Oct 01 13:09:08 compute-0 systemd[1]: libpod-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope: Deactivated successfully.
Oct 01 13:09:08 compute-0 podman[77389]: 2025-10-01 13:09:08.747385201 +0000 UTC m=+1.296150251 container died 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb7cae3cdf549c62844fb69938ebbd4daf1df791535d4774e51ffe90b82fb94-merged.mount: Deactivated successfully.
Oct 01 13:09:09 compute-0 podman[77389]: 2025-10-01 13:09:09.421435115 +0000 UTC m=+1.970200125 container remove 7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76 (image=quay.io/ceph/ceph:v18, name=hardcore_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:09 compute-0 ceph-mon[74802]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:09 compute-0 podman[77564]: 2025-10-01 13:09:09.519813133 +0000 UTC m=+0.080125325 container create caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:09:09 compute-0 podman[77564]: 2025-10-01 13:09:09.460059014 +0000 UTC m=+0.020371186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:09 compute-0 systemd[1]: Started libpod-conmon-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope.
Oct 01 13:09:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:09 compute-0 podman[77564]: 2025-10-01 13:09:09.627150498 +0000 UTC m=+0.187462730 container init caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:09:09 compute-0 podman[77564]: 2025-10-01 13:09:09.632613316 +0000 UTC m=+0.192925508 container start caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:09 compute-0 podman[77564]: 2025-10-01 13:09:09.687326825 +0000 UTC m=+0.247639027 container attach caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:09:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:09 compute-0 systemd[1]: libpod-conmon-7ff959660e0fee235ec946116dbca5e8353377827666850e510b9666d41f6e76.scope: Deactivated successfully.
Oct 01 13:09:10 compute-0 podman[77647]: 2025-10-01 13:09:10.06100228 +0000 UTC m=+0.149748731 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:10 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:10 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service crash spec with placement *
Oct 01 13:09:10 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct 01 13:09:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 13:09:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:10 compute-0 heuristic_jennings[77594]: Scheduled crash update...
Oct 01 13:09:10 compute-0 systemd[1]: libpod-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope: Deactivated successfully.
Oct 01 13:09:10 compute-0 podman[77564]: 2025-10-01 13:09:10.24499968 +0000 UTC m=+0.805311822 container died caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:09:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-df20ca8eb03adf7a64f9a8c0307967c308a81878b828e1c3076634607a9cde43-merged.mount: Deactivated successfully.
Oct 01 13:09:10 compute-0 ceph-mon[74802]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:10 compute-0 ceph-mon[74802]: Saving service mgr spec with placement count:2
Oct 01 13:09:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:10 compute-0 podman[77564]: 2025-10-01 13:09:10.908915384 +0000 UTC m=+1.469227576 container remove caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2 (image=quay.io/ceph/ceph:v18, name=heuristic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:09:10 compute-0 systemd[1]: libpod-conmon-caa7a85cc2032ef442f2fb5c8d705f48911f5e1c56c9ad1bb493bfb3c743d2a2.scope: Deactivated successfully.
Oct 01 13:09:11 compute-0 podman[77707]: 2025-10-01 13:09:11.020647811 +0000 UTC m=+0.084988996 container create 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:09:11 compute-0 podman[77707]: 2025-10-01 13:09:10.962611657 +0000 UTC m=+0.026952862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:11 compute-0 podman[77647]: 2025-10-01 13:09:11.108355674 +0000 UTC m=+1.197102115 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:11 compute-0 systemd[1]: Started libpod-conmon-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope.
Oct 01 13:09:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:11 compute-0 podman[77707]: 2025-10-01 13:09:11.453692267 +0000 UTC m=+0.518033532 container init 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:11 compute-0 podman[77707]: 2025-10-01 13:09:11.459941079 +0000 UTC m=+0.524282264 container start 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:11 compute-0 podman[77707]: 2025-10-01 13:09:11.540706311 +0000 UTC m=+0.605047496 container attach 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:11 compute-0 sudo[77513]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:11 compute-0 sudo[77749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:11 compute-0 sudo[77749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:11 compute-0 sudo[77749]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:11 compute-0 sudo[77793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:11 compute-0 sudo[77793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:11 compute-0 sudo[77793]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:11 compute-0 sudo[77818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:11 compute-0 sudo[77818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:11 compute-0 sudo[77818]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:11 compute-0 ceph-mon[74802]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:11 compute-0 ceph-mon[74802]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:11 compute-0 ceph-mon[74802]: Saving service crash spec with placement *
Oct 01 13:09:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:11 compute-0 sudo[77843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:09:11 compute-0 sudo[77843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct 01 13:09:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2848333521' entity='client.admin' 
Oct 01 13:09:12 compute-0 systemd[1]: libpod-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope: Deactivated successfully.
Oct 01 13:09:12 compute-0 podman[77707]: 2025-10-01 13:09:12.041763614 +0000 UTC m=+1.106104809 container died 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:09:12 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77896 (sysctl)
Oct 01 13:09:12 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 01 13:09:12 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 01 13:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-49dae1c83a964508a4b041f2b691e9a9986cdb30e96e3a3ea9bfa45af78059e2-merged.mount: Deactivated successfully.
Oct 01 13:09:12 compute-0 podman[77707]: 2025-10-01 13:09:12.359053608 +0000 UTC m=+1.423394833 container remove 0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec (image=quay.io/ceph/ceph:v18, name=nervous_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:12 compute-0 systemd[1]: libpod-conmon-0f3eabf0cbe9910e0e5fc811c5accbe3f70d0623e38b623af8517cb15f781aec.scope: Deactivated successfully.
Oct 01 13:09:12 compute-0 sudo[77843]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:12 compute-0 podman[77906]: 2025-10-01 13:09:12.409648438 +0000 UTC m=+0.025388905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:12 compute-0 podman[77906]: 2025-10-01 13:09:12.538229528 +0000 UTC m=+0.153969945 container create fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:12 compute-0 sshd-session[77714]: Received disconnect from 27.254.137.144 port 39278:11: Bye Bye [preauth]
Oct 01 13:09:12 compute-0 sshd-session[77714]: Disconnected from authenticating user root 27.254.137.144 port 39278 [preauth]
Oct 01 13:09:12 compute-0 systemd[1]: Started libpod-conmon-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope.
Oct 01 13:09:12 compute-0 sudo[77932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:12 compute-0 sudo[77932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:12 compute-0 sudo[77932]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:12 compute-0 sudo[77963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:12 compute-0 sudo[77963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:12 compute-0 sudo[77963]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:12 compute-0 podman[77906]: 2025-10-01 13:09:12.716468937 +0000 UTC m=+0.332209364 container init fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:12 compute-0 podman[77906]: 2025-10-01 13:09:12.722833484 +0000 UTC m=+0.338573901 container start fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:09:12 compute-0 sudo[77988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:12 compute-0 sudo[77988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:12 compute-0 sudo[77988]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:12 compute-0 podman[77906]: 2025-10-01 13:09:12.7915499 +0000 UTC m=+0.407290357 container attach fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:12 compute-0 sudo[78015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 01 13:09:12 compute-0 sudo[78015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:13 compute-0 sudo[78015]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:13 compute-0 ceph-mon[74802]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:13 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2848333521' entity='client.admin' 
Oct 01 13:09:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:13 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct 01 13:09:13 compute-0 sudo[78077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:13 compute-0 sudo[78077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:13 compute-0 sudo[78077]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:13 compute-0 systemd[1]: libpod-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope: Deactivated successfully.
Oct 01 13:09:13 compute-0 podman[77906]: 2025-10-01 13:09:13.47721804 +0000 UTC m=+1.092958457 container died fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:09:13 compute-0 sudo[78103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:13 compute-0 sudo[78103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:13 compute-0 sudo[78103]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:13 compute-0 sudo[78131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:13 compute-0 sudo[78131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:13 compute-0 sudo[78131]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:13 compute-0 sudo[78166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- inventory --format=json-pretty --filter-for-batch
Oct 01 13:09:13 compute-0 sudo[78166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-79d0fd50d22869f99d25d986093a42ebe90efe3597a67290ad2db8f570621a9b-merged.mount: Deactivated successfully.
Oct 01 13:09:14 compute-0 podman[77906]: 2025-10-01 13:09:14.037154333 +0000 UTC m=+1.652894790 container remove fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94 (image=quay.io/ceph/ceph:v18, name=kind_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:14 compute-0 systemd[1]: libpod-conmon-fe3693578b4a450845fd44011d42f94549078af01a1bf732741b8ce631e52e94.scope: Deactivated successfully.
Oct 01 13:09:14 compute-0 podman[78203]: 2025-10-01 13:09:14.117414213 +0000 UTC m=+0.044366151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:14 compute-0 podman[78203]: 2025-10-01 13:09:14.265008579 +0000 UTC m=+0.191960457 container create ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:14 compute-0 systemd[1]: Started libpod-conmon-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope.
Oct 01 13:09:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:14 compute-0 ceph-mon[74802]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:14 compute-0 podman[78203]: 2025-10-01 13:09:14.661290247 +0000 UTC m=+0.588242115 container init ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:09:14 compute-0 podman[78203]: 2025-10-01 13:09:14.671254821 +0000 UTC m=+0.598206699 container start ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:09:14 compute-0 podman[78203]: 2025-10-01 13:09:14.798466881 +0000 UTC m=+0.725418749 container attach ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:14 compute-0 podman[78250]: 2025-10-01 13:09:14.852253689 +0000 UTC m=+0.026038973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:14 compute-0 podman[78250]: 2025-10-01 13:09:14.983901913 +0000 UTC m=+0.157687157 container create 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:09:15 compute-0 systemd[1]: Started libpod-conmon-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope.
Oct 01 13:09:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:15 compute-0 podman[78250]: 2025-10-01 13:09:15.2876791 +0000 UTC m=+0.461464394 container init 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:09:15 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:09:15 compute-0 podman[78250]: 2025-10-01 13:09:15.298347314 +0000 UTC m=+0.472132568 container start 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:15 compute-0 peaceful_chandrasekhar[78285]: 167 167
Oct 01 13:09:15 compute-0 systemd[1]: libpod-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope: Deactivated successfully.
Oct 01 13:09:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:15 compute-0 podman[78250]: 2025-10-01 13:09:15.423755965 +0000 UTC m=+0.597541249 container attach 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:09:15 compute-0 podman[78250]: 2025-10-01 13:09:15.424215865 +0000 UTC m=+0.598001109 container died 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:09:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:15 compute-0 ceph-mgr[75103]: [cephadm INFO root] Added label _admin to host compute-0
Oct 01 13:09:15 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct 01 13:09:15 compute-0 festive_edison[78245]: Added label _admin to host compute-0
Oct 01 13:09:15 compute-0 systemd[1]: libpod-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope: Deactivated successfully.
Oct 01 13:09:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:15 compute-0 ceph-mon[74802]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e6a32a66fb274c8b91170120b8411a32db8c0f36be6e257df8c4f88da4b1dee-merged.mount: Deactivated successfully.
Oct 01 13:09:16 compute-0 podman[78250]: 2025-10-01 13:09:16.628551044 +0000 UTC m=+1.802336308 container remove 36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:09:16 compute-0 podman[78203]: 2025-10-01 13:09:16.635441393 +0000 UTC m=+2.562393261 container died ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5be8f2ca689e834664f25f6bd12a356a347fd66b880be5333b9dca340eeec2cc-merged.mount: Deactivated successfully.
Oct 01 13:09:17 compute-0 ceph-mon[74802]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:17 compute-0 ceph-mon[74802]: Added label _admin to host compute-0
Oct 01 13:09:17 compute-0 ceph-mon[74802]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:17 compute-0 podman[78203]: 2025-10-01 13:09:17.307506223 +0000 UTC m=+3.234458071 container remove ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657 (image=quay.io/ceph/ceph:v18, name=festive_edison, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:17 compute-0 systemd[1]: libpod-conmon-36b46a96440cd069b27f689e550f511b299d3c180fe07d2cd6fbfd04bf30a0b3.scope: Deactivated successfully.
Oct 01 13:09:17 compute-0 systemd[1]: libpod-conmon-ba45e649901c4c560cde524e4295c6154bd52a52d2f3d77d898781ae2df99657.scope: Deactivated successfully.
Oct 01 13:09:17 compute-0 podman[78318]: 2025-10-01 13:09:17.387357887 +0000 UTC m=+0.041368415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:17 compute-0 podman[78318]: 2025-10-01 13:09:17.527996208 +0000 UTC m=+0.182006736 container create 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:17 compute-0 systemd[1]: Started libpod-conmon-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope.
Oct 01 13:09:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:17 compute-0 podman[78318]: 2025-10-01 13:09:17.731359741 +0000 UTC m=+0.385370269 container init 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:17 compute-0 podman[78318]: 2025-10-01 13:09:17.741908576 +0000 UTC m=+0.395919064 container start 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:17 compute-0 podman[78318]: 2025-10-01 13:09:17.844312954 +0000 UTC m=+0.498323442 container attach 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:09:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct 01 13:09:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3755830514' entity='client.admin' 
Oct 01 13:09:18 compute-0 systemd[1]: libpod-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope: Deactivated successfully.
Oct 01 13:09:18 compute-0 podman[78318]: 2025-10-01 13:09:18.364969014 +0000 UTC m=+1.018979562 container died 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-42a10f85c1003d7a0147a4b79e976eaec3aa2c041d451249628a00ab9ca46486-merged.mount: Deactivated successfully.
Oct 01 13:09:18 compute-0 podman[78318]: 2025-10-01 13:09:18.563226295 +0000 UTC m=+1.217236793 container remove 5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a (image=quay.io/ceph/ceph:v18, name=eager_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:09:18 compute-0 systemd[1]: libpod-conmon-5ccef933fd86814ba645de57851195a327ad2360d761c796b3601661cc5b790a.scope: Deactivated successfully.
Oct 01 13:09:18 compute-0 podman[78374]: 2025-10-01 13:09:18.659424286 +0000 UTC m=+0.068840305 container create 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:09:18 compute-0 systemd[1]: Started libpod-conmon-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope.
Oct 01 13:09:18 compute-0 podman[78374]: 2025-10-01 13:09:18.619465458 +0000 UTC m=+0.028881547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:18 compute-0 podman[78374]: 2025-10-01 13:09:18.755373341 +0000 UTC m=+0.164789350 container init 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:09:18 compute-0 podman[78374]: 2025-10-01 13:09:18.765561044 +0000 UTC m=+0.174977093 container start 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:09:18 compute-0 podman[78374]: 2025-10-01 13:09:18.786023273 +0000 UTC m=+0.195439382 container attach 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:09:19 compute-0 ceph-mon[74802]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3755830514' entity='client.admin' 
Oct 01 13:09:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct 01 13:09:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1421745930' entity='client.admin' 
Oct 01 13:09:19 compute-0 relaxed_pasteur[78390]: set mgr/dashboard/cluster/status
Oct 01 13:09:19 compute-0 systemd[1]: libpod-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope: Deactivated successfully.
Oct 01 13:09:19 compute-0 podman[78416]: 2025-10-01 13:09:19.517925495 +0000 UTC m=+0.037153070 container died 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-57516a8beee4386c62188d7b8fa32e388ec055c5d15327cbb28d539507a3d2a8-merged.mount: Deactivated successfully.
Oct 01 13:09:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:19 compute-0 podman[78416]: 2025-10-01 13:09:19.719048667 +0000 UTC m=+0.238276262 container remove 56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063 (image=quay.io/ceph/ceph:v18, name=relaxed_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:09:19 compute-0 systemd[1]: libpod-conmon-56771fbf481273f796e9bc6f9828f0d78ceca53ce73356ac21e7b13341b28063.scope: Deactivated successfully.
Oct 01 13:09:19 compute-0 sudo[73780]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:20 compute-0 podman[78438]: 2025-10-01 13:09:20.001949773 +0000 UTC m=+0.084055458 container create b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:09:20 compute-0 podman[78438]: 2025-10-01 13:09:19.945091678 +0000 UTC m=+0.027197443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:20 compute-0 systemd[1]: Started libpod-conmon-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope.
Oct 01 13:09:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 podman[78438]: 2025-10-01 13:09:20.121875658 +0000 UTC m=+0.203981393 container init b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:20 compute-0 podman[78438]: 2025-10-01 13:09:20.129903032 +0000 UTC m=+0.212008727 container start b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:09:20 compute-0 sudo[78483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkroiwvkayfmikxtgiugyrjkzecyehnu ; /usr/bin/python3'
Oct 01 13:09:20 compute-0 sudo[78483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:20 compute-0 podman[78438]: 2025-10-01 13:09:20.225053051 +0000 UTC m=+0.307158776 container attach b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:09:20 compute-0 python3[78485]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:20 compute-0 podman[78486]: 2025-10-01 13:09:20.398170604 +0000 UTC m=+0.067478072 container create 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:09:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:20 compute-0 systemd[1]: Started libpod-conmon-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope.
Oct 01 13:09:20 compute-0 podman[78486]: 2025-10-01 13:09:20.363850904 +0000 UTC m=+0.033158402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1421745930' entity='client.admin' 
Oct 01 13:09:20 compute-0 podman[78486]: 2025-10-01 13:09:20.569043165 +0000 UTC m=+0.238350713 container init 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:09:20 compute-0 podman[78486]: 2025-10-01 13:09:20.580466568 +0000 UTC m=+0.249774026 container start 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:09:20 compute-0 podman[78486]: 2025-10-01 13:09:20.596908869 +0000 UTC m=+0.266216417 container attach 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:09:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct 01 13:09:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/653286515' entity='client.admin' 
Oct 01 13:09:21 compute-0 systemd[1]: libpod-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope: Deactivated successfully.
Oct 01 13:09:21 compute-0 podman[78486]: 2025-10-01 13:09:21.176019833 +0000 UTC m=+0.845327331 container died 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-286e1f62c187c916da046910a564c678b23a9913325fd8b771497893d3f42faf-merged.mount: Deactivated successfully.
Oct 01 13:09:21 compute-0 great_carson[78454]: [
Oct 01 13:09:21 compute-0 great_carson[78454]:     {
Oct 01 13:09:21 compute-0 great_carson[78454]:         "available": false,
Oct 01 13:09:21 compute-0 great_carson[78454]:         "ceph_device": false,
Oct 01 13:09:21 compute-0 great_carson[78454]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 13:09:21 compute-0 great_carson[78454]:         "lsm_data": {},
Oct 01 13:09:21 compute-0 great_carson[78454]:         "lvs": [],
Oct 01 13:09:21 compute-0 great_carson[78454]:         "path": "/dev/sr0",
Oct 01 13:09:21 compute-0 great_carson[78454]:         "rejected_reasons": [
Oct 01 13:09:21 compute-0 great_carson[78454]:             "Has a FileSystem",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "Insufficient space (<5GB)"
Oct 01 13:09:21 compute-0 great_carson[78454]:         ],
Oct 01 13:09:21 compute-0 great_carson[78454]:         "sys_api": {
Oct 01 13:09:21 compute-0 great_carson[78454]:             "actuators": null,
Oct 01 13:09:21 compute-0 great_carson[78454]:             "device_nodes": "sr0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "devname": "sr0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "human_readable_size": "482.00 KB",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "id_bus": "ata",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "model": "QEMU DVD-ROM",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "nr_requests": "2",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "parent": "/dev/sr0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "partitions": {},
Oct 01 13:09:21 compute-0 great_carson[78454]:             "path": "/dev/sr0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "removable": "1",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "rev": "2.5+",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "ro": "0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "rotational": "0",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "sas_address": "",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "sas_device_handle": "",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "scheduler_mode": "mq-deadline",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "sectors": 0,
Oct 01 13:09:21 compute-0 great_carson[78454]:             "sectorsize": "2048",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "size": 493568.0,
Oct 01 13:09:21 compute-0 great_carson[78454]:             "support_discard": "2048",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "type": "disk",
Oct 01 13:09:21 compute-0 great_carson[78454]:             "vendor": "QEMU"
Oct 01 13:09:21 compute-0 great_carson[78454]:         }
Oct 01 13:09:21 compute-0 great_carson[78454]:     }
Oct 01 13:09:21 compute-0 great_carson[78454]: ]
Oct 01 13:09:21 compute-0 systemd[1]: libpod-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Deactivated successfully.
Oct 01 13:09:21 compute-0 systemd[1]: libpod-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Consumed 1.388s CPU time.
Oct 01 13:09:21 compute-0 podman[78486]: 2025-10-01 13:09:21.532464643 +0000 UTC m=+1.201772101 container remove 066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b (image=quay.io/ceph/ceph:v18, name=hopeful_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:09:21 compute-0 systemd[1]: libpod-conmon-066e47cc6896f6c9941afbdeb7d899a6b6ab92a413d17256fc18b5c130c3e81b.scope: Deactivated successfully.
Oct 01 13:09:21 compute-0 ceph-mon[74802]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:21 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/653286515' entity='client.admin' 
Oct 01 13:09:21 compute-0 podman[78438]: 2025-10-01 13:09:21.548395858 +0000 UTC m=+1.630501573 container died b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:09:21 compute-0 sudo[78483]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-37eb9d3cf27ff8fabec7d6ab31f78aa1386ba3e39790bc60afc10175cdebc633-merged.mount: Deactivated successfully.
Oct 01 13:09:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:21 compute-0 podman[80082]: 2025-10-01 13:09:21.85323766 +0000 UTC m=+0.345547304 container remove b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:09:21 compute-0 systemd[1]: libpod-conmon-b9ddd06d2b820716ef3c6e048f8fdb25d2df6a0f542a21d5d6b56484613df831.scope: Deactivated successfully.
Oct 01 13:09:21 compute-0 sudo[78166]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:09:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:09:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:09:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:22 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct 01 13:09:22 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct 01 13:09:22 compute-0 sudo[80141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80141]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 01 13:09:22 compute-0 sudo[80192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80192]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80222]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph
Oct 01 13:09:22 compute-0 sudo[80247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80247]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80290]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.conf.new
Oct 01 13:09:22 compute-0 sudo[80341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80341]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeqxwbstvskfzykzhicsqjnwgxvfkzna ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324161.9630404-33747-74233129533190/async_wrapper.py j18096652907 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324161.9630404-33747-74233129533190/AnsiballZ_command.py _'
Oct 01 13:09:22 compute-0 sudo[80400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:22 compute-0 sudo[80391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80391]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:22 compute-0 sudo[80422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80422]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 ansible-async_wrapper.py[80418]: Invoked with j18096652907 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324161.9630404-33747-74233129533190/AnsiballZ_command.py _
Oct 01 13:09:22 compute-0 sudo[80447]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 ansible-async_wrapper.py[80474]: Starting module and watcher
Oct 01 13:09:22 compute-0 ansible-async_wrapper.py[80474]: Start watching 80475 (30)
Oct 01 13:09:22 compute-0 ansible-async_wrapper.py[80475]: Start module (80475)
Oct 01 13:09:22 compute-0 ansible-async_wrapper.py[80418]: Return async_wrapper task started.
Oct 01 13:09:22 compute-0 sudo[80400]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.conf.new
Oct 01 13:09:22 compute-0 sudo[80476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80476]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80525]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 python3[80477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:22 compute-0 sudo[80550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.conf.new
Oct 01 13:09:22 compute-0 sudo[80550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80550]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 sudo[80588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:22 compute-0 sudo[80588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:22 compute-0 sudo[80588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:22 compute-0 podman[80554]: 2025-10-01 13:09:22.958582111 +0000 UTC m=+0.098062202 container create c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:09:22 compute-0 podman[80554]: 2025-10-01 13:09:22.881097772 +0000 UTC m=+0.020577903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:22 compute-0 sudo[80613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.conf.new
Oct 01 13:09:22 compute-0 sudo[80613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80613]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 ceph-mon[74802]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:23 compute-0 ceph-mon[74802]: Updating compute-0:/etc/ceph/ceph.conf
Oct 01 13:09:23 compute-0 systemd[1]: Started libpod-conmon-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope.
Oct 01 13:09:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:23 compute-0 sudo[80638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:23 compute-0 sudo[80638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80638]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 podman[80554]: 2025-10-01 13:09:23.10007228 +0000 UTC m=+0.239552401 container init c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:23 compute-0 podman[80554]: 2025-10-01 13:09:23.106643949 +0000 UTC m=+0.246124050 container start c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:23 compute-0 sudo[80669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 01 13:09:23 compute-0 sudo[80669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 podman[80554]: 2025-10-01 13:09:23.114807068 +0000 UTC m=+0.254287169 container attach c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:09:23 compute-0 sudo[80669]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct 01 13:09:23 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct 01 13:09:23 compute-0 sudo[80695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80695]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config
Oct 01 13:09:23 compute-0 sudo[80720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80720]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80745]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config
Oct 01 13:09:23 compute-0 sudo[80770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80770]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80795]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf.new
Oct 01 13:09:23 compute-0 sudo[80837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80837]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80864]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:23 compute-0 sudo[80889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80889]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[80914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80914]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:09:23 compute-0 tender_mirzakhani[80664]: 
Oct 01 13:09:23 compute-0 tender_mirzakhani[80664]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 13:09:23 compute-0 systemd[1]: libpod-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope: Deactivated successfully.
Oct 01 13:09:23 compute-0 podman[80554]: 2025-10-01 13:09:23.644588736 +0000 UTC m=+0.784068847 container died c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:23 compute-0 sudo[80939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf.new
Oct 01 13:09:23 compute-0 sudo[80939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80939]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb42b8706fcf0851bfa1a1259c674872c31d794584243019a480605bed6b2994-merged.mount: Deactivated successfully.
Oct 01 13:09:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:23 compute-0 sudo[80999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[80999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[80999]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[81047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf.new
Oct 01 13:09:23 compute-0 sudo[81047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[81047]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 podman[80554]: 2025-10-01 13:09:23.836235767 +0000 UTC m=+0.975715868 container remove c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4 (image=quay.io/ceph/ceph:v18, name=tender_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:09:23 compute-0 ansible-async_wrapper.py[80475]: Module complete (80475)
Oct 01 13:09:23 compute-0 systemd[1]: libpod-conmon-c10b396bbc6b8b9d90ad7b9284dd4ff44051e2701f28347fdcf268e960373cd4.scope: Deactivated successfully.
Oct 01 13:09:23 compute-0 sudo[81072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:23 compute-0 sudo[81072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[81072]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:23 compute-0 sudo[81097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf.new
Oct 01 13:09:23 compute-0 sudo[81097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:23 compute-0 sudo[81097]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbdccxssydqtnimqoydjfwncttghrdoz ; /usr/bin/python3'
Oct 01 13:09:24 compute-0 sudo[81159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:24 compute-0 sudo[81130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 ceph-mon[74802]: Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct 01 13:09:24 compute-0 sudo[81130]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf.new /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.conf
Oct 01 13:09:24 compute-0 sudo[81173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81173]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 13:09:24 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 13:09:24 compute-0 sudo[81198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81198]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 python3[81170]: ansible-ansible.legacy.async_status Invoked with jid=j18096652907.80418 mode=status _async_dir=/root/.ansible_async
Oct 01 13:09:24 compute-0 sudo[81159]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 01 13:09:24 compute-0 sudo[81223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81223]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph
Oct 01 13:09:24 compute-0 sudo[81300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81300]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmogfiatrkiarqogvwkvmrkmgivgihk ; /usr/bin/python3'
Oct 01 13:09:24 compute-0 sudo[81342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:24 compute-0 sudo[81346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81346]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.client.admin.keyring.new
Oct 01 13:09:24 compute-0 sudo[81372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81372]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 python3[81347]: ansible-ansible.legacy.async_status Invoked with jid=j18096652907.80418 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 13:09:24 compute-0 sudo[81342]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81397]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:24 compute-0 sudo[81422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81422]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81447]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.client.admin.keyring.new
Oct 01 13:09:24 compute-0 sudo[81472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81472]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81520]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzqpzqjxfpuaxhdkyyvxacmxrnvkilwb ; /usr/bin/python3'
Oct 01 13:09:24 compute-0 sudo[81586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:24 compute-0 sudo[81550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.client.admin.keyring.new
Oct 01 13:09:24 compute-0 sudo[81550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81550]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:24 compute-0 sudo[81596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:24 compute-0 sudo[81596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:24 compute-0 sudo[81596]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.client.admin.keyring.new
Oct 01 13:09:25 compute-0 sudo[81621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81621]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 python3[81593]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:09:25 compute-0 sudo[81646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[81646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 ceph-mon[74802]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:09:25 compute-0 ceph-mon[74802]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:25 compute-0 sudo[81646]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 ceph-mon[74802]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 01 13:09:25 compute-0 sudo[81586]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 01 13:09:25 compute-0 sudo[81673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct 01 13:09:25 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct 01 13:09:25 compute-0 sudo[81698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[81698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config
Oct 01 13:09:25 compute-0 sudo[81723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81723]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[81748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81748]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config
Oct 01 13:09:25 compute-0 sudo[81773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81773]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obrzkykuwptfnjhghngkqepaedalsndo ; /usr/bin/python3'
Oct 01 13:09:25 compute-0 sudo[81820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:25 compute-0 sudo[81822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[81822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81822]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring.new
Oct 01 13:09:25 compute-0 sudo[81849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81849]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[81874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81874]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 python3[81831]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:25 compute-0 sudo[81899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:25 compute-0 sudo[81899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 sudo[81936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 podman[81905]: 2025-10-01 13:09:25.698965969 +0000 UTC m=+0.099937543 container create 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:25 compute-0 sudo[81936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81936]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:25 compute-0 podman[81905]: 2025-10-01 13:09:25.63567104 +0000 UTC m=+0.036642634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:25 compute-0 systemd[1]: Started libpod-conmon-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope.
Oct 01 13:09:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:25 compute-0 sudo[81962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring.new
Oct 01 13:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:25 compute-0 sudo[81962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[81962]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:25 compute-0 podman[81905]: 2025-10-01 13:09:25.826108482 +0000 UTC m=+0.227080076 container init 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:25 compute-0 podman[81905]: 2025-10-01 13:09:25.836892024 +0000 UTC m=+0.237863608 container start 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:25 compute-0 podman[81905]: 2025-10-01 13:09:25.854205374 +0000 UTC m=+0.255176978 container attach 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:09:25 compute-0 sudo[82017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:25 compute-0 sudo[82017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:25 compute-0 sudo[82017]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring.new
Oct 01 13:09:26 compute-0 sudo[82042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82042]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:26 compute-0 sudo[82067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82067]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 ceph-mon[74802]: Updating compute-0:/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct 01 13:09:26 compute-0 sudo[82093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring.new
Oct 01 13:09:26 compute-0 sudo[82093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82093]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:26 compute-0 sudo[82136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82136]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-eb4b6ead-01d1-53b3-a52a-47dcc600555f/var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring.new /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/config/ceph.client.admin.keyring
Oct 01 13:09:26 compute-0 sudo[82161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82161]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:26 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1))
Oct 01 13:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 01 13:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:26 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct 01 13:09:26 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct 01 13:09:26 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:09:26 compute-0 nice_dijkstra[81987]: 
Oct 01 13:09:26 compute-0 nice_dijkstra[81987]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 13:09:26 compute-0 systemd[1]: libpod-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope: Deactivated successfully.
Oct 01 13:09:26 compute-0 podman[81905]: 2025-10-01 13:09:26.377702803 +0000 UTC m=+0.778674427 container died 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0135342bfa440cacd91b0997364837fc7e2077c2cafd0623b3f38c123badf5da-merged.mount: Deactivated successfully.
Oct 01 13:09:26 compute-0 sudo[82186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:26 compute-0 sudo[82186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82186]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 podman[81905]: 2025-10-01 13:09:26.441924851 +0000 UTC m=+0.842896435 container remove 48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c (image=quay.io/ceph/ceph:v18, name=nice_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:26 compute-0 systemd[1]: libpod-conmon-48f1db6ef89968be087023005ecf8c4f393921406ad3507da6fa67429b7b4b7c.scope: Deactivated successfully.
Oct 01 13:09:26 compute-0 sudo[81820]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:26 compute-0 sudo[82227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82227]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:26 compute-0 sudo[82252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82252]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:26 compute-0 sudo[82277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:26 compute-0 sudo[82277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:26 compute-0 sudo[82337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqvzcxugimapsqiyqszygczvzjqbpxp ; /usr/bin/python3'
Oct 01 13:09:26 compute-0 sudo[82337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:26 compute-0 python3[82342]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:26 compute-0 podman[82369]: 2025-10-01 13:09:26.989269797 +0000 UTC m=+0.044657268 container create 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:09:27 compute-0 systemd[1]: Started libpod-conmon-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope.
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.037547809 +0000 UTC m=+0.046454345 container create 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:09:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:27.055442106 +0000 UTC m=+0.110829597 container init 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:09:27 compute-0 systemd[1]: Started libpod-conmon-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope.
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:27.061914042 +0000 UTC m=+0.117301513 container start 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:09:27 compute-0 infallible_williams[82398]: 167 167
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:27.064399921 +0000 UTC m=+0.119787392 container attach 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:26.970187852 +0000 UTC m=+0.025575353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:27.06561947 +0000 UTC m=+0.121006941 container died 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:27 compute-0 systemd[1]: libpod-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope: Deactivated successfully.
Oct 01 13:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.08390594 +0000 UTC m=+0.092812496 container init 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66ada92d76aee509fa6ed157ad1a8177b1ed36deb14923fdb6e851a9145a4e4-merged.mount: Deactivated successfully.
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.091056796 +0000 UTC m=+0.099963332 container start 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:09:27 compute-0 podman[82369]: 2025-10-01 13:09:27.101752526 +0000 UTC m=+0.157139997 container remove 79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.013604359 +0000 UTC m=+0.022510915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.114268763 +0000 UTC m=+0.123175299 container attach 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:27 compute-0 systemd[1]: libpod-conmon-79da8d932ac619d4ba612141b29ff648e9eead1161679c51b55a1d8bd9c321d2.scope: Deactivated successfully.
Oct 01 13:09:27 compute-0 systemd[1]: Reloading.
Oct 01 13:09:27 compute-0 systemd-sysv-generator[82447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:27 compute-0 systemd-rc-local-generator[82444]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:27 compute-0 ceph-mon[74802]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:27 compute-0 ceph-mon[74802]: Deploying daemon crash.compute-0 on compute-0
Oct 01 13:09:27 compute-0 ceph-mon[74802]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:09:27 compute-0 systemd[1]: Reloading.
Oct 01 13:09:27 compute-0 systemd-rc-local-generator[82509]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:27 compute-0 systemd-sysv-generator[82513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct 01 13:09:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/197864987' entity='client.admin' 
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.637355399 +0000 UTC m=+0.646261985 container died 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:27 compute-0 ansible-async_wrapper.py[80474]: Done in kid B.
Oct 01 13:09:27 compute-0 systemd[1]: libpod-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope: Deactivated successfully.
Oct 01 13:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c534a2a3892954c2e12b5b230d8a88dc0e509f0e6432b3a629a6f50442416e-merged.mount: Deactivated successfully.
Oct 01 13:09:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:27 compute-0 systemd[1]: Starting Ceph crash.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:09:27 compute-0 podman[82383]: 2025-10-01 13:09:27.733746648 +0000 UTC m=+0.742653184 container remove 7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62 (image=quay.io/ceph/ceph:v18, name=condescending_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:09:27 compute-0 systemd[1]: libpod-conmon-7fd345efc2b004c4bbf5c4a5a4cccbddaf76124575e950475e2e084ca750df62.scope: Deactivated successfully.
Oct 01 13:09:27 compute-0 sudo[82337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:27 compute-0 sudo[82601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfmjyzjtmragaafuclgoalaequpghfp ; /usr/bin/python3'
Oct 01 13:09:27 compute-0 sudo[82601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:28 compute-0 podman[82602]: 2025-10-01 13:09:28.000697177 +0000 UTC m=+0.057530976 container create 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a1c7d5717c682ede20a896c95a4dfa8369d903589dee8ccb33f41d34a51d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 podman[82602]: 2025-10-01 13:09:27.984632388 +0000 UTC m=+0.041466187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:28 compute-0 podman[82602]: 2025-10-01 13:09:28.080209041 +0000 UTC m=+0.137042860 container init 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:28 compute-0 podman[82602]: 2025-10-01 13:09:28.086068227 +0000 UTC m=+0.142902026 container start 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:09:28 compute-0 bash[82602]: 0abeef01559daebfdceaeda5aaeac65b95dae3bdfefab887df54718451fda229
Oct 01 13:09:28 compute-0 systemd[1]: Started Ceph crash.compute-0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:09:28 compute-0 sudo[82277]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1))
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event b537f5e1-c19e-4fd8-ab75-e750d5a49393 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a73487e5-39de-4df2-b136-c7a6912a3a4b does not exist
Oct 01 13:09:28 compute-0 python3[82610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2))
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct 01 13:09:28 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.223008241 +0000 UTC m=+0.041937711 container create 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:09:28 compute-0 sudo[82625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:28 compute-0 sudo[82625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:28 compute-0 sudo[82625]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:28 compute-0 systemd[1]: Started libpod-conmon-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope.
Oct 01 13:09:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:28 compute-0 sudo[82662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:28 compute-0 sudo[82662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:28 compute-0 sudo[82662]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.30112292 +0000 UTC m=+0.120052410 container init 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.208429669 +0000 UTC m=+0.027359159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.309803695 +0000 UTC m=+0.128733165 container start 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.313373889 +0000 UTC m=+0.132303379 container attach 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:09:28 compute-0 sudo[82694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:28 compute-0 sudo[82694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:28 compute-0 sudo[82694]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:28 compute-0 sudo[82720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:28 compute-0 sudo[82720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.474+0000 7f10d4f50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.474+0000 7f10d4f50640 -1 AuthRegistry(0x7f10d0066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.476+0000 7f10d4f50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.476+0000 7f10d4f50640 -1 AuthRegistry(0x7f10d4f4f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.477+0000 7f10ce575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: 2025-10-01T13:09:28.477+0000 7f10d4f50640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 01 13:09:28 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-crash-compute-0[82619]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/197864987' entity='client.admin' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hktmnz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:28 compute-0 ceph-mon[74802]: Deploying daemon mgr.compute-0.hktmnz on compute-0
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.719773973 +0000 UTC m=+0.035415435 container create 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:09:28 compute-0 systemd[1]: Started libpod-conmon-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope.
Oct 01 13:09:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.784429595 +0000 UTC m=+0.100071087 container init 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.794480583 +0000 UTC m=+0.110122035 container start 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:28 compute-0 angry_chatterjee[82831]: 167 167
Oct 01 13:09:28 compute-0 systemd[1]: libpod-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope: Deactivated successfully.
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.797957324 +0000 UTC m=+0.113598816 container attach 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:09:28 compute-0 conmon[82831]: conmon 472a2c2ff36b238ab2db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope/container/memory.events
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.799478102 +0000 UTC m=+0.115119604 container died 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.704085825 +0000 UTC m=+0.019727307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct 01 13:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5f9348c047441dcefd72c59677954f52992c61bee3293388ccba0c8b726b579-merged.mount: Deactivated successfully.
Oct 01 13:09:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4066853649' entity='client.admin' 
Oct 01 13:09:28 compute-0 podman[82814]: 2025-10-01 13:09:28.844948155 +0000 UTC m=+0.160589617 container remove 472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:28 compute-0 systemd[1]: libpod-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope: Deactivated successfully.
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.853576509 +0000 UTC m=+0.672506379 container died 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:28 compute-0 systemd[1]: libpod-conmon-472a2c2ff36b238ab2db7bfb182e97c9de45991fd3f00fb6a6f837bfe150392c.scope: Deactivated successfully.
Oct 01 13:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f549c2d72000b356cbe21a24713332eeb27b836252797f0ca5141db864c8d42-merged.mount: Deactivated successfully.
Oct 01 13:09:28 compute-0 podman[82624]: 2025-10-01 13:09:28.899914798 +0000 UTC m=+0.718844268 container remove 3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5 (image=quay.io/ceph/ceph:v18, name=relaxed_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:09:28 compute-0 systemd[1]: Reloading.
Oct 01 13:09:28 compute-0 sudo[82601]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:28 compute-0 systemd-rc-local-generator[82888]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:28 compute-0 systemd-sysv-generator[82891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:29 compute-0 systemd[1]: libpod-conmon-3a6c5033a5d7ec791cc2b54dae309a517e71741e7d20ce323ccae0bdd146c9f5.scope: Deactivated successfully.
Oct 01 13:09:29 compute-0 sudo[82926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwmhtocnrpprzlkutissnwauclfzicer ; /usr/bin/python3'
Oct 01 13:09:29 compute-0 sudo[82926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:29 compute-0 systemd[1]: Reloading.
Oct 01 13:09:29 compute-0 systemd-rc-local-generator[82959]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:29 compute-0 systemd-sysv-generator[82963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:29 compute-0 python3[82931]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:29 compute-0 podman[82966]: 2025-10-01 13:09:29.382192291 +0000 UTC m=+0.046352371 container create 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:09:29 compute-0 sshd-session[82764]: Received disconnect from 80.253.31.232 port 49590:11: Bye Bye [preauth]
Oct 01 13:09:29 compute-0 sshd-session[82764]: Disconnected from authenticating user root 80.253.31.232 port 49590 [preauth]
Oct 01 13:09:29 compute-0 systemd[1]: Started libpod-conmon-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope.
Oct 01 13:09:29 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:09:29 compute-0 podman[82966]: 2025-10-01 13:09:29.360228624 +0000 UTC m=+0.024388704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 podman[82966]: 2025-10-01 13:09:29.47955211 +0000 UTC m=+0.143712210 container init 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:29 compute-0 podman[82966]: 2025-10-01 13:09:29.485792648 +0000 UTC m=+0.149952728 container start 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:09:29 compute-0 podman[82966]: 2025-10-01 13:09:29.489146505 +0000 UTC m=+0.153306595 container attach 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:29 compute-0 podman[83035]: 2025-10-01 13:09:29.671571633 +0000 UTC m=+0.046645212 container create 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda/merged/var/lib/ceph/mgr/ceph-compute-0.hktmnz supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:29 compute-0 podman[83035]: 2025-10-01 13:09:29.729571423 +0000 UTC m=+0.104645052 container init 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:29 compute-0 podman[83035]: 2025-10-01 13:09:29.740641044 +0000 UTC m=+0.115714623 container start 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:29 compute-0 bash[83035]: 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b
Oct 01 13:09:29 compute-0 podman[83035]: 2025-10-01 13:09:29.65100776 +0000 UTC m=+0.026081379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:29 compute-0 systemd[1]: Started Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:09:29 compute-0 sudo[82720]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:29 compute-0 ceph-mgr[83054]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:09:29 compute-0 ceph-mgr[83054]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 01 13:09:29 compute-0 ceph-mgr[83054]: pidfile_write: ignore empty --pid-file
Oct 01 13:09:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2))
Oct 01 13:09:29 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 1f0e416d-5877-454d-9b58-832a4a0a9061 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct 01 13:09:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4066853649' entity='client.admin' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:29 compute-0 sudo[83098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:29 compute-0 sudo[83098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:29 compute-0 sudo[83098]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:29 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'alerts'
Oct 01 13:09:29 compute-0 sudo[83123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:09:29 compute-0 sudo[83123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:29 compute-0 sudo[83123]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:29 compute-0 sudo[83148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:29 compute-0 sudo[83148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:30 compute-0 sudo[83148]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:30 compute-0 sudo[83173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 01 13:09:30 compute-0 sudo[83173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:30 compute-0 sudo[83173]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:30 compute-0 sudo[83199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:30 compute-0 sudo[83199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:30 compute-0 sudo[83199]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:30 compute-0 sudo[83224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:09:30 compute-0 sudo[83224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:30 compute-0 ceph-mgr[83054]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:09:30 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'balancer'
Oct 01 13:09:30 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:30.231+0000 7f903defc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:30 compute-0 ceph-mgr[83054]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:09:30 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'cephadm'
Oct 01 13:09:30 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:30.471+0000 7f903defc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 01 13:09:30 compute-0 podman[83322]: 2025-10-01 13:09:30.585111918 +0000 UTC m=+0.053532210 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:30 compute-0 podman[83322]: 2025-10-01 13:09:30.676096225 +0000 UTC m=+0.144516497 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:30 compute-0 ceph-mon[74802]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:30 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct 01 13:09:30 compute-0 distracted_gates[82983]: set require_min_compat_client to mimic
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct 01 13:09:30 compute-0 systemd[1]: libpod-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope: Deactivated successfully.
Oct 01 13:09:30 compute-0 podman[82966]: 2025-10-01 13:09:30.86586142 +0000 UTC m=+1.530021500 container died 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-825b653bd8c30adbaf5f3ce8f9fb29ac941ad66d9a07af7a269d7f63207fb942-merged.mount: Deactivated successfully.
Oct 01 13:09:30 compute-0 podman[82966]: 2025-10-01 13:09:30.911132069 +0000 UTC m=+1.575292149 container remove 42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26 (image=quay.io/ceph/ceph:v18, name=distracted_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:09:30 compute-0 systemd[1]: libpod-conmon-42732c066729e7bc0544d0267016d1ec5deee858fce78c5b6bb638309281df26.scope: Deactivated successfully.
Oct 01 13:09:30 compute-0 sudo[82926]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:30 compute-0 sudo[83224]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:09:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a3c69b1d-7c82-4e91-9c1a-39d73b79f7d4 does not exist
Oct 01 13:09:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b9c83e13-d6c6-4df6-95a8-2a52343164a5 does not exist
Oct 01 13:09:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7bc16f79-54bb-4aa6-a2ab-1c2ea3b1ff75 does not exist
Oct 01 13:09:31 compute-0 sudo[83421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:31 compute-0 sudo[83421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83421]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 sudo[83446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:09:31 compute-0 sudo[83446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83446]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 13:09:31 compute-0 sudo[83471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:31 compute-0 sudo[83471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83471]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 sudo[83496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:31 compute-0 sudo[83496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83496]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 sudo[83521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:31 compute-0 sudo[83521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83521]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 sudo[83546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:31 compute-0 sudo[83546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjtjbaqrabtwixmjfmrxwdtyzaokfem ; /usr/bin/python3'
Oct 01 13:09:31 compute-0 sudo[83611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.565300885 +0000 UTC m=+0.037430699 container create 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:09:31 compute-0 systemd[1]: Started libpod-conmon-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope.
Oct 01 13:09:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.635557774 +0000 UTC m=+0.107687618 container init 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.641587203 +0000 UTC m=+0.113717017 container start 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.644779042 +0000 UTC m=+0.116908896 container attach 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.549275117 +0000 UTC m=+0.021404961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:31 compute-0 interesting_mayer[83630]: 167 167
Oct 01 13:09:31 compute-0 systemd[1]: libpod-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope: Deactivated successfully.
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.649543815 +0000 UTC m=+0.121673659 container died 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-27e6ca2310d38829e568d5263231d171e30350151f041fd588b4d68178864dcf-merged.mount: Deactivated successfully.
Oct 01 13:09:31 compute-0 podman[83612]: 2025-10-01 13:09:31.691859251 +0000 UTC m=+0.163989095 container remove 5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:09:31 compute-0 python3[83620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:31 compute-0 systemd[1]: libpod-conmon-5d09eaf9e0356276dab0a1624f6bef66a12746e54fa919177175639835d8756f.scope: Deactivated successfully.
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:31 compute-0 sudo[83546]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct 01 13:09:31 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct 01 13:09:31 compute-0 podman[83647]: 2025-10-01 13:09:31.761235154 +0000 UTC m=+0.050550307 container create 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:09:31 compute-0 systemd[1]: Started libpod-conmon-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope.
Oct 01 13:09:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:31 compute-0 sudo[83662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:31 compute-0 sudo[83662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:31 compute-0 sudo[83662]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 podman[83647]: 2025-10-01 13:09:31.831889894 +0000 UTC m=+0.121205067 container init 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:09:31 compute-0 podman[83647]: 2025-10-01 13:09:31.837557063 +0000 UTC m=+0.126872206 container start 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:31 compute-0 podman[83647]: 2025-10-01 13:09:31.743982301 +0000 UTC m=+0.033297454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:31 compute-0 podman[83647]: 2025-10-01 13:09:31.840847695 +0000 UTC m=+0.130162868 container attach 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/315915558' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct 01 13:09:31 compute-0 ceph-mon[74802]: osdmap e3: 0 total, 0 up, 0 in
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.puxjpb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 13:09:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:31 compute-0 sudo[83698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:31 compute-0 sudo[83698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:31 compute-0 sudo[83729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:31 compute-0 sudo[83729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:31 compute-0 sudo[83729]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:32 compute-0 sudo[83754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.263797394 +0000 UTC m=+0.056482203 container create 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:09:32 compute-0 systemd[1]: Started libpod-conmon-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope.
Oct 01 13:09:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.244487873 +0000 UTC m=+0.037172702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.345282677 +0000 UTC m=+0.137967506 container init 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.356323727 +0000 UTC m=+0.149008526 container start 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.359447894 +0000 UTC m=+0.152132693 container attach 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 01 13:09:32 compute-0 priceless_curie[83831]: 167 167
Oct 01 13:09:32 compute-0 systemd[1]: libpod-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope: Deactivated successfully.
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.363988731 +0000 UTC m=+0.156673560 container died 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4162976ac10eefd9141540c45f7893b7e53bd2318c510cc31b605f5b20ffd9f8-merged.mount: Deactivated successfully.
Oct 01 13:09:32 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'crash'
Oct 01 13:09:32 compute-0 podman[83815]: 2025-10-01 13:09:32.417212932 +0000 UTC m=+0.209897741 container remove 2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:32 compute-0 systemd[1]: libpod-conmon-2adcd1582c940f25e2de938816cdbd47470bec746b75484f81281e706b480768.scope: Deactivated successfully.
Oct 01 13:09:32 compute-0 sudo[83849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:32 compute-0 sudo[83849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83849]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83754]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 sudo[83876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:32 compute-0 sudo[83876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83876]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:32 compute-0 sudo[83877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83877]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:32 compute-0 sudo[83925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83925]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:32 compute-0 sudo[83933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83933]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:32 compute-0 sudo[83979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 sudo[83975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Oct 01 13:09:32 compute-0 sudo[83979]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 sudo[83975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 ceph-mgr[83054]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:09:32 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'dashboard'
Oct 01 13:09:32 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:32.676+0000 7f903defc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 01 13:09:32 compute-0 sudo[84026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:09:32 compute-0 sudo[84026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 2 completed events
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 sudo[83975]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [cephadm INFO root] Added host compute-0
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Added host compute-0
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct 01 13:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct 01 13:09:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:32 compute-0 modest_panini[83687]: Added host 'compute-0' with addr '192.168.122.100'
Oct 01 13:09:32 compute-0 modest_panini[83687]: Scheduled mon update...
Oct 01 13:09:32 compute-0 modest_panini[83687]: Scheduled mgr update...
Oct 01 13:09:32 compute-0 modest_panini[83687]: Scheduled osd.default_drive_group update...
Oct 01 13:09:32 compute-0 systemd[1]: libpod-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope: Deactivated successfully.
Oct 01 13:09:32 compute-0 podman[83647]: 2025-10-01 13:09:32.966313256 +0000 UTC m=+1.255628399 container died 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f9120419dec7154dee1cf2513d39083511f63bbe08f0bde570548656fdb702d-merged.mount: Deactivated successfully.
Oct 01 13:09:33 compute-0 podman[83647]: 2025-10-01 13:09:33.025465703 +0000 UTC m=+1.314780886 container remove 34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e (image=quay.io/ceph/ceph:v18, name=modest_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:09:33 compute-0 systemd[1]: libpod-conmon-34168d554fd1f6da4c2e0343511b26d67bdb7473a6ff6683a3de63b8ca662f3e.scope: Deactivated successfully.
Oct 01 13:09:33 compute-0 sudo[83611]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:33 compute-0 podman[84154]: 2025-10-01 13:09:33.244321033 +0000 UTC m=+0.063994223 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:09:33 compute-0 podman[84154]: 2025-10-01 13:09:33.355125398 +0000 UTC m=+0.174798568 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:33 compute-0 sudo[84207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajyofhsnvasvgmourkqzzspvlvcoaewg ; /usr/bin/python3'
Oct 01 13:09:33 compute-0 sudo[84207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:09:33 compute-0 ceph-mon[74802]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:33 compute-0 ceph-mon[74802]: Reconfiguring mgr.compute-0.puxjpb (unknown last config time)...
Oct 01 13:09:33 compute-0 ceph-mon[74802]: Reconfiguring daemon mgr.compute-0.puxjpb on compute-0
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 python3[84216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:09:33 compute-0 podman[84249]: 2025-10-01 13:09:33.589763762 +0000 UTC m=+0.045636330 container create a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:33 compute-0 systemd[1]: Started libpod-conmon-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope.
Oct 01 13:09:33 compute-0 sudo[84026]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:33 compute-0 podman[84249]: 2025-10-01 13:09:33.569897825 +0000 UTC m=+0.025770413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:09:33 compute-0 podman[84249]: 2025-10-01 13:09:33.663977591 +0000 UTC m=+0.119850159 container init a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 podman[84249]: 2025-10-01 13:09:33.67786259 +0000 UTC m=+0.133735148 container start a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 podman[84249]: 2025-10-01 13:09:33.685792992 +0000 UTC m=+0.141665560 container attach a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 88a29694-f9e2-449a-beb3-f5af6db07171 does not exist
Oct 01 13:09:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct 01 13:09:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:33 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1))
Oct 01 13:09:33 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct 01 13:09:33 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct 01 13:09:33 compute-0 sudo[84287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:33 compute-0 sudo[84287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:33 compute-0 sudo[84287]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:33 compute-0 sudo[84312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:33 compute-0 sudo[84312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:33 compute-0 sudo[84312]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:33 compute-0 sudo[84337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:33 compute-0 sudo[84337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:33 compute-0 sudo[84337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:33 compute-0 sudo[84362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --name mgr.compute-0.hktmnz --force --tcp-ports 8765
Oct 01 13:09:33 compute-0 sudo[84362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:34 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'devicehealth'
Oct 01 13:09:34 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701032644' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:09:34 compute-0 thirsty_snyder[84283]: 
Oct 01 13:09:34 compute-0 thirsty_snyder[84283]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":93,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-01T13:07:57.318832+0000","services":{}},"progress_events":{}}
Oct 01 13:09:34 compute-0 systemd[1]: libpod-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope: Deactivated successfully.
Oct 01 13:09:34 compute-0 podman[84249]: 2025-10-01 13:09:34.277285163 +0000 UTC m=+0.733157741 container died a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a42c1ffa769ecc114547b6a2434fac0f21e788b6168aeb7f29d16b29c5fc12b-merged.mount: Deactivated successfully.
Oct 01 13:09:34 compute-0 podman[84249]: 2025-10-01 13:09:34.341030559 +0000 UTC m=+0.796903107 container remove a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b (image=quay.io/ceph/ceph:v18, name=thirsty_snyder, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:34 compute-0 systemd[1]: libpod-conmon-a1faac74fd6a8ccab441c92fdc12a495a32ac48133ddcb61a9989cc9d7d67f6b.scope: Deactivated successfully.
Oct 01 13:09:34 compute-0 sudo[84207]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:34 compute-0 ceph-mgr[83054]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:09:34 compute-0 ceph-mgr[83054]: mgr[py] Loading python module 'diskprediction_local'
Oct 01 13:09:34 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz[83050]: 2025-10-01T13:09:34.398+0000 7f903defc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 01 13:09:34 compute-0 podman[84491]: 2025-10-01 13:09:34.452215064 +0000 UTC m=+0.068647934 container died 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-792ec9ad31f2820bb23b202d09979320cdfe08a20feda4a81d09d135d98a5bda-merged.mount: Deactivated successfully.
Oct 01 13:09:34 compute-0 ceph-mon[74802]: Added host compute-0
Oct 01 13:09:34 compute-0 ceph-mon[74802]: Saving service mon spec with placement compute-0
Oct 01 13:09:34 compute-0 ceph-mon[74802]: Saving service mgr spec with placement compute-0
Oct 01 13:09:34 compute-0 ceph-mon[74802]: Marking host: compute-0 for OSDSpec preview refresh.
Oct 01 13:09:34 compute-0 ceph-mon[74802]: Saving service osd.default_drive_group spec with placement compute-0
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1701032644' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:09:34 compute-0 podman[84491]: 2025-10-01 13:09:34.496051011 +0000 UTC m=+0.112483881 container remove 91ef6674bb3b6e70c89642640876f7b25e443b951bfe34543b2826516351304b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:34 compute-0 bash[84491]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-hktmnz
Oct 01 13:09:34 compute-0 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Main process exited, code=exited, status=143/n/a
Oct 01 13:09:34 compute-0 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Failed with result 'exit-code'.
Oct 01 13:09:34 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.hktmnz for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:09:34 compute-0 systemd[1]: ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.hktmnz.service: Consumed 5.489s CPU time.
Oct 01 13:09:34 compute-0 systemd[1]: Reloading.
Oct 01 13:09:34 compute-0 systemd-rc-local-generator[84578]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:34 compute-0 systemd-sysv-generator[84583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:34 compute-0 sudo[84362]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:34 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.hktmnz
Oct 01 13:09:34 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.hktmnz
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"} v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]: dispatch
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]': finished
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1))
Oct 01 13:09:34 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 8bc42142-0e21-440d-84c2-e86a31779c5d (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:34 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ca47551-44fa-4e84-b634-1fbc4d606c4e does not exist
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:09:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:34 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:35 compute-0 sudo[84587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:35 compute-0 sudo[84587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:35 compute-0 sudo[84587]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:35 compute-0 sudo[84612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:35 compute-0 sudo[84612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:35 compute-0 sudo[84612]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:35 compute-0 sudo[84637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:35 compute-0 sudo[84637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:35 compute-0 sudo[84637]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:35 compute-0 sudo[84662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:09:35 compute-0 sudo[84662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.479575876 +0000 UTC m=+0.039034565 container create 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:09:35 compute-0 ceph-mon[74802]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:35 compute-0 ceph-mon[74802]: Removing daemon mgr.compute-0.hktmnz from compute-0 -- ports [8765]
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]: dispatch
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hktmnz"}]': finished
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:09:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:35 compute-0 systemd[1]: Started libpod-conmon-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope.
Oct 01 13:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.557856889 +0000 UTC m=+0.117315618 container init 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.463017562 +0000 UTC m=+0.022476281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.565272826 +0000 UTC m=+0.124731535 container start 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.569775553 +0000 UTC m=+0.129234242 container attach 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:09:35 compute-0 practical_rosalind[84743]: 167 167
Oct 01 13:09:35 compute-0 systemd[1]: libpod-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope: Deactivated successfully.
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.571273804 +0000 UTC m=+0.130732513 container died 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb640248e9072b01e437cc7ec14e54854062d3891eca68e101f047b01c1fdca4-merged.mount: Deactivated successfully.
Oct 01 13:09:35 compute-0 podman[84727]: 2025-10-01 13:09:35.615162795 +0000 UTC m=+0.174621484 container remove 0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:09:35 compute-0 systemd[1]: libpod-conmon-0d23e39916a5e4a24681f0bd67a5da0f5e632e8f5078cc3339c22d109f2eff9f.scope: Deactivated successfully.
Oct 01 13:09:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:35 compute-0 podman[84766]: 2025-10-01 13:09:35.794665054 +0000 UTC m=+0.039668693 container create f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:09:35 compute-0 systemd[1]: Started libpod-conmon-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope.
Oct 01 13:09:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:35 compute-0 podman[84766]: 2025-10-01 13:09:35.777038469 +0000 UTC m=+0.022042128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:35 compute-0 podman[84766]: 2025-10-01 13:09:35.883247865 +0000 UTC m=+0.128251514 container init f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:35 compute-0 podman[84766]: 2025-10-01 13:09:35.891099945 +0000 UTC m=+0.136103584 container start f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:35 compute-0 podman[84766]: 2025-10-01 13:09:35.894976964 +0000 UTC m=+0.139980633 container attach f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:09:36 compute-0 ceph-mon[74802]: Removing key for mgr.compute-0.hktmnz
Oct 01 13:09:36 compute-0 vibrant_antonelli[84782]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:09:36 compute-0 vibrant_antonelli[84782]: --> relative data size: 1.0
Oct 01 13:09:36 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"} v 0) v1
Oct 01 13:09:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]: dispatch
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]': finished
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct 01 13:09:37 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:37 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:37 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:37 compute-0 ceph-mon[74802]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:37 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]: dispatch
Oct 01 13:09:37 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/972123675' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982"}]': finished
Oct 01 13:09:37 compute-0 ceph-mon[74802]: osdmap e4: 1 total, 0 up, 1 in
Oct 01 13:09:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:37 compute-0 lvm[84844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 13:09:37 compute-0 lvm[84844]: VG ceph_vg0 finished
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:37 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 01 13:09:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:37 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 3 completed events
Oct 01 13:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:09:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 13:09:38 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256121332' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:38 compute-0 vibrant_antonelli[84782]:  stderr: got monmap epoch 1
Oct 01 13:09:38 compute-0 vibrant_antonelli[84782]: --> Creating keyring file for osd.0
Oct 01 13:09:38 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 01 13:09:38 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 01 13:09:38 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982 --setuser ceph --setgroup ceph
Oct 01 13:09:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 01 13:09:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 13:09:38 compute-0 ceph-mon[74802]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:38 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/256121332' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:38 compute-0 ceph-mon[74802]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 01 13:09:38 compute-0 ceph-mon[74802]: Cluster is now healthy
Oct 01 13:09:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:38.191+0000 7f3def11e740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:40 compute-0 ceph-mon[74802]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:40 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f5852bc7-e830-489a-b8a9-42dfbbe71426
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"} v 0) v1
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]: dispatch
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]': finished
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:41 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:41 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:41 compute-0 lvm[85787]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 13:09:41 compute-0 lvm[85787]: VG ceph_vg1 finished
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 01 13:09:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 13:09:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018595264' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]:  stderr: got monmap epoch 1
Oct 01 13:09:41 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]: dispatch
Oct 01 13:09:41 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2920071575' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426"}]': finished
Oct 01 13:09:41 compute-0 ceph-mon[74802]: osdmap e5: 2 total, 0 up, 2 in
Oct 01 13:09:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: --> Creating keyring file for osd.1
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 01 13:09:41 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid f5852bc7-e830-489a-b8a9-42dfbbe71426 --setuser ceph --setgroup ceph
Oct 01 13:09:42 compute-0 ceph-mon[74802]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:42 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1018595264' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:41.913+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:41.914+0000 7fc23c1b9740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c4c937e2-a8a8-47c3-af37-fdedb6fff1f9
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"} v 0) v1
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]': finished
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:09:44 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:09:44 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:09:44 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:09:44 compute-0 ceph-mon[74802]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:44 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/681149013' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9"}]': finished
Oct 01 13:09:44 compute-0 ceph-mon[74802]: osdmap e6: 3 total, 0 up, 3 in
Oct 01 13:09:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 01 13:09:44 compute-0 lvm[86730]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 13:09:44 compute-0 lvm[86730]: VG ceph_vg2 finished
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 13:09:44 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 01 13:09:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct 01 13:09:45 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446363946' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:45 compute-0 vibrant_antonelli[84782]:  stderr: got monmap epoch 1
Oct 01 13:09:45 compute-0 vibrant_antonelli[84782]: --> Creating keyring file for osd.2
Oct 01 13:09:45 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 01 13:09:45 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 01 13:09:45 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid c4c937e2-a8a8-47c3-af37-fdedb6fff1f9 --setuser ceph --setgroup ceph
Oct 01 13:09:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:45 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/446363946' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 01 13:09:46 compute-0 sshd-session[86804]: Invalid user superadmin from 156.236.31.46 port 43740
Oct 01 13:09:46 compute-0 sshd-session[86804]: Received disconnect from 156.236.31.46 port 43740:11: Bye Bye [preauth]
Oct 01 13:09:46 compute-0 sshd-session[86804]: Disconnected from invalid user superadmin 156.236.31.46 port 43740 [preauth]
Oct 01 13:09:46 compute-0 ceph-mon[74802]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:09:47
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] No pools available
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:09:47 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:45.454+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:47 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:47 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 01 13:09:47 compute-0 vibrant_antonelli[84782]:  stderr: 2025-10-01T13:09:45.455+0000 7fde61a30740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 01 13:09:47 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 01 13:09:48 compute-0 vibrant_antonelli[84782]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct 01 13:09:48 compute-0 systemd[1]: libpod-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Deactivated successfully.
Oct 01 13:09:48 compute-0 systemd[1]: libpod-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Consumed 6.269s CPU time.
Oct 01 13:09:48 compute-0 podman[84766]: 2025-10-01 13:09:48.121474394 +0000 UTC m=+12.366478073 container died f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5899fa019168fa672415cca67e6b988cab3bae6e4398bbb0e7cbacf867f763-merged.mount: Deactivated successfully.
Oct 01 13:09:48 compute-0 podman[84766]: 2025-10-01 13:09:48.189385737 +0000 UTC m=+12.434389376 container remove f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:48 compute-0 systemd[1]: libpod-conmon-f727f656b268e03a8c49914ecde46c83439d05432f409bfebf64914a269c82ca.scope: Deactivated successfully.
Oct 01 13:09:48 compute-0 sudo[84662]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:48 compute-0 sudo[87661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:48 compute-0 sudo[87661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:48 compute-0 sudo[87661]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:48 compute-0 sudo[87686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:48 compute-0 sudo[87686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:48 compute-0 sudo[87686]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:48 compute-0 sudo[87711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:48 compute-0 sudo[87711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:48 compute-0 sudo[87711]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:48 compute-0 sudo[87736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:09:48 compute-0 sudo[87736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.838905844 +0000 UTC m=+0.035472935 container create bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:48 compute-0 ceph-mon[74802]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:48 compute-0 systemd[1]: Started libpod-conmon-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope.
Oct 01 13:09:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.821064023 +0000 UTC m=+0.017631124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.926264841 +0000 UTC m=+0.122832012 container init bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.932543827 +0000 UTC m=+0.129110908 container start bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.936010864 +0000 UTC m=+0.132577975 container attach bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:48 compute-0 strange_bell[87817]: 167 167
Oct 01 13:09:48 compute-0 systemd[1]: libpod-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope: Deactivated successfully.
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.93695002 +0000 UTC m=+0.133517101 container died bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-354f7ca130eb7c07045434c5e04d3eaa1a54a108cd211521813017db378a3269-merged.mount: Deactivated successfully.
Oct 01 13:09:48 compute-0 podman[87800]: 2025-10-01 13:09:48.981075106 +0000 UTC m=+0.177642217 container remove bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:48 compute-0 systemd[1]: libpod-conmon-bd0a8a207fb9035ca5f04473ad07fbe285b0eb5ba40a855e9e0e056c5983038a.scope: Deactivated successfully.
Oct 01 13:09:49 compute-0 podman[87841]: 2025-10-01 13:09:49.192328905 +0000 UTC m=+0.061179135 container create 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:09:49 compute-0 systemd[1]: Started libpod-conmon-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope.
Oct 01 13:09:49 compute-0 podman[87841]: 2025-10-01 13:09:49.16861588 +0000 UTC m=+0.037466200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:49 compute-0 podman[87841]: 2025-10-01 13:09:49.280207026 +0000 UTC m=+0.149057266 container init 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:09:49 compute-0 podman[87841]: 2025-10-01 13:09:49.290361632 +0000 UTC m=+0.159211912 container start 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:49 compute-0 podman[87841]: 2025-10-01 13:09:49.295033462 +0000 UTC m=+0.163883732 container attach 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]: {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     "0": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "devices": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "/dev/loop3"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             ],
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_name": "ceph_lv0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_size": "21470642176",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "name": "ceph_lv0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "tags": {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_name": "ceph",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.crush_device_class": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.encrypted": "0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_id": "0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.vdo": "0"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             },
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "vg_name": "ceph_vg0"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         }
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     ],
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     "1": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "devices": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "/dev/loop4"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             ],
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_name": "ceph_lv1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_size": "21470642176",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "name": "ceph_lv1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "tags": {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_name": "ceph",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.crush_device_class": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.encrypted": "0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_id": "1",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.vdo": "0"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             },
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "vg_name": "ceph_vg1"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         }
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     ],
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     "2": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "devices": [
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "/dev/loop5"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             ],
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_name": "ceph_lv2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_size": "21470642176",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "name": "ceph_lv2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "tags": {
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.cluster_name": "ceph",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.crush_device_class": "",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.encrypted": "0",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osd_id": "2",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:                 "ceph.vdo": "0"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             },
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "type": "block",
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:             "vg_name": "ceph_vg2"
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:         }
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]:     ]
Oct 01 13:09:50 compute-0 affectionate_mccarthy[87857]: }
Oct 01 13:09:50 compute-0 systemd[1]: libpod-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope: Deactivated successfully.
Oct 01 13:09:50 compute-0 podman[87866]: 2025-10-01 13:09:50.078372188 +0000 UTC m=+0.020545516 container died 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-867dbb708c40a32eb50f9a7d9bd67f8a9b37de6f08edef0784d07a0624977579-merged.mount: Deactivated successfully.
Oct 01 13:09:50 compute-0 podman[87866]: 2025-10-01 13:09:50.133679998 +0000 UTC m=+0.075853306 container remove 3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:50 compute-0 systemd[1]: libpod-conmon-3e22003c5ca29893d72ddd3845c63ee08e3d5c1e3c897b83f200e6305bb061de.scope: Deactivated successfully.
Oct 01 13:09:50 compute-0 sudo[87736]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct 01 13:09:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 01 13:09:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:50 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct 01 13:09:50 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct 01 13:09:50 compute-0 sudo[87881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:50 compute-0 sudo[87881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:50 compute-0 sudo[87881]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:50 compute-0 sudo[87906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:50 compute-0 sudo[87906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:50 compute-0 sudo[87906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:50 compute-0 sudo[87931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:50 compute-0 sudo[87931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:50 compute-0 sudo[87931]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:50 compute-0 sudo[87956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:50 compute-0 sudo[87956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.80017817 +0000 UTC m=+0.039478147 container create ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:09:50 compute-0 systemd[1]: Started libpod-conmon-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope.
Oct 01 13:09:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.856395304 +0000 UTC m=+0.095695301 container init ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:50 compute-0 ceph-mon[74802]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 01 13:09:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:50 compute-0 ceph-mon[74802]: Deploying daemon osd.0 on compute-0
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.863608957 +0000 UTC m=+0.102908924 container start ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.867213958 +0000 UTC m=+0.106513925 container attach ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:50 compute-0 sharp_banzai[88038]: 167 167
Oct 01 13:09:50 compute-0 systemd[1]: libpod-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope: Deactivated successfully.
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.868412291 +0000 UTC m=+0.107712288 container died ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.782637528 +0000 UTC m=+0.021937535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-70362e1f5e6ee1f42638f140bce7cdc40cfffacce546b5ae8c2b67dce3069f24-merged.mount: Deactivated successfully.
Oct 01 13:09:50 compute-0 podman[88022]: 2025-10-01 13:09:50.908444673 +0000 UTC m=+0.147744640 container remove ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:09:50 compute-0 systemd[1]: libpod-conmon-ad81535b733d071e78bdf7f47c8283ae114fb5ce2439a48e9db7367c98146434.scope: Deactivated successfully.
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.161566504 +0000 UTC m=+0.050339812 container create a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:51 compute-0 systemd[1]: Started libpod-conmon-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope.
Oct 01 13:09:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.23781368 +0000 UTC m=+0.126587008 container init a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.147074318 +0000 UTC m=+0.035847646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.243849809 +0000 UTC m=+0.132623117 container start a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.247120001 +0000 UTC m=+0.135893339 container attach a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:09:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:51 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 13:09:51 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]:                             [--no-systemd] [--no-tmpfs]
Oct 01 13:09:51 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test[88088]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 13:09:51 compute-0 systemd[1]: libpod-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope: Deactivated successfully.
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.871950226 +0000 UTC m=+0.760723614 container died a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da61eb0d72957d1a5f40cb978719b94ec7635fc31652c4a009a1d21ac010d6f-merged.mount: Deactivated successfully.
Oct 01 13:09:51 compute-0 podman[88072]: 2025-10-01 13:09:51.93097889 +0000 UTC m=+0.819752198 container remove a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:09:51 compute-0 systemd[1]: libpod-conmon-a6eb26ac3159156543170ac332121212a4d8982deed95dcd0bc85dd5cbb190d8.scope: Deactivated successfully.
Oct 01 13:09:52 compute-0 systemd[1]: Reloading.
Oct 01 13:09:52 compute-0 systemd-sysv-generator[88151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:52 compute-0 systemd-rc-local-generator[88147]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:52 compute-0 systemd[1]: Reloading.
Oct 01 13:09:52 compute-0 systemd-rc-local-generator[88189]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:52 compute-0 systemd-sysv-generator[88193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:52 compute-0 systemd[1]: Starting Ceph osd.0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:09:52 compute-0 ceph-mon[74802]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:52 compute-0 podman[88242]: 2025-10-01 13:09:52.911153 +0000 UTC m=+0.047093801 container create 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:09:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:52 compute-0 podman[88242]: 2025-10-01 13:09:52.885852471 +0000 UTC m=+0.021793252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:52 compute-0 podman[88242]: 2025-10-01 13:09:52.998520307 +0000 UTC m=+0.134461148 container init 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:53 compute-0 podman[88242]: 2025-10-01 13:09:53.006986574 +0000 UTC m=+0.142927335 container start 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:53 compute-0 podman[88242]: 2025-10-01 13:09:53.010172424 +0000 UTC m=+0.146113185 container attach 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:09:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:53 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:53 compute-0 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:53 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 13:09:53 compute-0 bash[88242]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 13:09:54 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 13:09:54 compute-0 bash[88242]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 01 13:09:54 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 13:09:54 compute-0 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 01 13:09:54 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:54 compute-0 bash[88242]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:54 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:54 compute-0 bash[88242]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 01 13:09:54 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate[88257]: --> ceph-volume raw activate successful for osd ID: 0
Oct 01 13:09:54 compute-0 bash[88242]: --> ceph-volume raw activate successful for osd ID: 0
Oct 01 13:09:54 compute-0 systemd[1]: libpod-198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8.scope: Deactivated successfully.
Oct 01 13:09:54 compute-0 podman[88242]: 2025-10-01 13:09:54.080770267 +0000 UTC m=+1.216711028 container died 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:09:54 compute-0 systemd[1]: libpod-198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8.scope: Consumed 1.081s CPU time.
Oct 01 13:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7009417f2d1926e6265fbcf87850b21f3ab2a47bc4591a5c33434a107f77880-merged.mount: Deactivated successfully.
Oct 01 13:09:54 compute-0 podman[88242]: 2025-10-01 13:09:54.131287013 +0000 UTC m=+1.267227774 container remove 198e933602157243fd46a56ffc3160bdac505fc345b66a202338c85b29ae4bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:09:54 compute-0 podman[88436]: 2025-10-01 13:09:54.340570686 +0000 UTC m=+0.042331978 container create ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9245638004ebefcbc9b9c4a80430a81ac617a216acae561c4c232e147ee036/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:54 compute-0 podman[88436]: 2025-10-01 13:09:54.397355007 +0000 UTC m=+0.099116339 container init ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:09:54 compute-0 podman[88436]: 2025-10-01 13:09:54.410360021 +0000 UTC m=+0.112121323 container start ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:09:54 compute-0 bash[88436]: ae2fd024bf44a1d4ea40453594604d2abf1ab3318d6a6ce26a91042adf10e2e7
Oct 01 13:09:54 compute-0 podman[88436]: 2025-10-01 13:09:54.32576409 +0000 UTC m=+0.027525382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:54 compute-0 systemd[1]: Started Ceph osd.0 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:09:54 compute-0 ceph-osd[88455]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:09:54 compute-0 ceph-osd[88455]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 13:09:54 compute-0 ceph-osd[88455]: pidfile_write: ignore empty --pid-file
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655ea7800 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 13:09:54 compute-0 sudo[87956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct 01 13:09:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 01 13:09:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:54 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct 01 13:09:54 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct 01 13:09:54 compute-0 sudo[88468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:54 compute-0 sudo[88468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:54 compute-0 sudo[88468]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:54 compute-0 sudo[88493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:54 compute-0 sudo[88493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:54 compute-0 sudo[88493]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:54 compute-0 sudo[88518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:54 compute-0 sudo[88518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:54 compute-0 sudo[88518]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b65506f800 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 13:09:54 compute-0 sudo[88543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:54 compute-0 sudo[88543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:54 compute-0 ceph-osd[88455]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 01 13:09:54 compute-0 ceph-osd[88455]: load: jerasure load: lrc 
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:54 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.131215176 +0000 UTC m=+0.060967490 container create 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:55 compute-0 systemd[1]: Started libpod-conmon-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope.
Oct 01 13:09:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.098052406 +0000 UTC m=+0.027804800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.198153021 +0000 UTC m=+0.127905355 container init 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.209095338 +0000 UTC m=+0.138847662 container start 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.213062398 +0000 UTC m=+0.142814742 container attach 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:55 compute-0 systemd[1]: libpod-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope: Deactivated successfully.
Oct 01 13:09:55 compute-0 intelligent_lichterman[88634]: 167 167
Oct 01 13:09:55 compute-0 conmon[88634]: conmon 96e41487ec158ed5a2d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope/container/memory.events
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.215560398 +0000 UTC m=+0.145312722 container died 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-99710fb479b037c90b340b1404799d9010cba1c7f0e0c144690a6d9ed272ffa1-merged.mount: Deactivated successfully.
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:55 compute-0 podman[88618]: 2025-10-01 13:09:55.255614541 +0000 UTC m=+0.185366895 container remove 96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 13:09:55 compute-0 systemd[1]: libpod-conmon-96e41487ec158ed5a2d6eed30602db74eaab0f840eb8fc803990e0e4475a3f12.scope: Deactivated successfully.
Oct 01 13:09:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:09:55 compute-0 ceph-mon[74802]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 01 13:09:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:55 compute-0 ceph-mon[74802]: Deploying daemon osd.1 on compute-0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f28c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs mount
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs mount shared_bdev_used = 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Git sha 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DB SUMMARY
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DB Session ID:  YR1W053FNRY3BI19KNCD
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                     Options.env: 0x55b655ef9c70
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                Options.info_log: 0x55b6550f68a0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.write_buffer_manager: 0x55b656002460
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.row_cache: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.wal_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.wal_compression: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Compression algorithms supported:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZSTD supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 podman[88669]: 2025-10-01 13:09:55.54790245 +0000 UTC m=+0.045003362 container create db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550f6240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4059e439-ad38-467a-9aae-938058dd7e0b
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195553643, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195553842, "job": 1, "event": "recovery_finished"}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: freelist init
Oct 01 13:09:55 compute-0 ceph-osd[88455]: freelist _read_cfg
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs umount
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) close
Oct 01 13:09:55 compute-0 systemd[1]: Started libpod-conmon-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope.
Oct 01 13:09:55 compute-0 podman[88669]: 2025-10-01 13:09:55.526093618 +0000 UTC m=+0.023194530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:55 compute-0 podman[88669]: 2025-10-01 13:09:55.65713412 +0000 UTC m=+0.154235032 container init db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:55 compute-0 podman[88669]: 2025-10-01 13:09:55.67427378 +0000 UTC m=+0.171374672 container start db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:55 compute-0 podman[88669]: 2025-10-01 13:09:55.678429026 +0000 UTC m=+0.175529908 container attach db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bdev(0x55b655f29400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs mount
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluefs mount shared_bdev_used = 4718592
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Git sha 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DB SUMMARY
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DB Session ID:  YR1W053FNRY3BI19KNCC
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                     Options.env: 0x55b6560aa310
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                Options.info_log: 0x55b6553bcf80
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.write_buffer_manager: 0x55b6560026e0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.row_cache: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                              Options.wal_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.wal_compression: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Compression algorithms supported:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZSTD supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:           Options.merge_operator: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6550ed060)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6550e3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.compression: LZ4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.num_levels: 7
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4059e439-ad38-467a-9aae-938058dd7e0b
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195818398, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195823312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195827314, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195830597, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4059e439-ad38-467a-9aae-938058dd7e0b", "db_session_id": "YR1W053FNRY3BI19KNCC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324195832168, "job": 1, "event": "recovery_finished"}
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b655251c00
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: DB pointer 0x55b655feba00
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 01 13:09:55 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:09:55 compute-0 ceph-osd[88455]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 13:09:55 compute-0 ceph-osd[88455]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 13:09:55 compute-0 ceph-osd[88455]: _get_class not permitted to load lua
Oct 01 13:09:55 compute-0 ceph-osd[88455]: _get_class not permitted to load sdk
Oct 01 13:09:55 compute-0 ceph-osd[88455]: _get_class not permitted to load test_remote_reads
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 load_pgs
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 load_pgs opened 0 pgs
Oct 01 13:09:55 compute-0 ceph-osd[88455]: osd.0 0 log_to_monitors true
Oct 01 13:09:55 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:09:55.860+0000 7fbad12bd740 -1 osd.0 0 log_to_monitors true
Oct 01 13:09:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct 01 13:09:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 01 13:09:56 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 13:09:56 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]:                             [--no-systemd] [--no-tmpfs]
Oct 01 13:09:56 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test[88879]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 13:09:56 compute-0 systemd[1]: libpod-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope: Deactivated successfully.
Oct 01 13:09:56 compute-0 podman[88669]: 2025-10-01 13:09:56.335072912 +0000 UTC m=+0.832173794 container died db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-78cdc6fee75bb72f77c548821f4fad4ec09138e28ad90dc09644606c525f9c65-merged.mount: Deactivated successfully.
Oct 01 13:09:56 compute-0 podman[88669]: 2025-10-01 13:09:56.399484746 +0000 UTC m=+0.896585638 container remove db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:56 compute-0 systemd[1]: libpod-conmon-db4fb75c0490e1a7a116634512fb87775a9b06e9ecc7458f7338be9b1f6683b4.scope: Deactivated successfully.
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:56 compute-0 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:56 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:09:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:56 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:09:56 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:56 compute-0 systemd[1]: Reloading.
Oct 01 13:09:56 compute-0 systemd-sysv-generator[89158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:56 compute-0 systemd-rc-local-generator[89154]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 13:09:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 13:09:57 compute-0 systemd[1]: Reloading.
Oct 01 13:09:57 compute-0 systemd-rc-local-generator[89194]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:09:57 compute-0 systemd-sysv-generator[89198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:09:57 compute-0 systemd[1]: Starting Ceph osd.1 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 done with init, starting boot process
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 start_boot
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 13:09:57 compute-0 ceph-osd[88455]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:09:57 compute-0 ceph-mon[74802]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:57 compute-0 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 01 13:09:57 compute-0 ceph-mon[74802]: osdmap e7: 3 total, 0 up, 3 in
Oct 01 13:09:57 compute-0 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:57 compute-0 podman[89259]: 2025-10-01 13:09:57.558631651 +0000 UTC m=+0.052224314 container create 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:09:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:09:57 compute-0 podman[89259]: 2025-10-01 13:09:57.530698618 +0000 UTC m=+0.024291311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:57 compute-0 podman[89259]: 2025-10-01 13:09:57.650828394 +0000 UTC m=+0.144421077 container init 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:57 compute-0 podman[89259]: 2025-10-01 13:09:57.655746002 +0000 UTC m=+0.149338685 container start 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:09:57 compute-0 podman[89259]: 2025-10-01 13:09:57.662944403 +0000 UTC m=+0.156537086 container attach 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:09:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:57 compute-0 sshd-session[89207]: Received disconnect from 200.7.101.139 port 43868:11: Bye Bye [preauth]
Oct 01 13:09:57 compute-0 sshd-session[89207]: Disconnected from authenticating user root 200.7.101.139 port 43868 [preauth]
Oct 01 13:09:58 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct 01 13:09:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:58 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:58 compute-0 ceph-mon[74802]: from='osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:09:58 compute-0 ceph-mon[74802]: osdmap e8: 3 total, 0 up, 3 in
Oct 01 13:09:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:09:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:09:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:58 compute-0 bash[89259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 01 13:09:58 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate[89275]: --> ceph-volume raw activate successful for osd ID: 1
Oct 01 13:09:58 compute-0 bash[89259]: --> ceph-volume raw activate successful for osd ID: 1
Oct 01 13:09:58 compute-0 systemd[1]: libpod-016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c.scope: Deactivated successfully.
Oct 01 13:09:58 compute-0 podman[89259]: 2025-10-01 13:09:58.774512134 +0000 UTC m=+1.268104847 container died 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:09:58 compute-0 systemd[1]: libpod-016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c.scope: Consumed 1.137s CPU time.
Oct 01 13:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb49125b2576ca0f5273994c8991553323da78be411fc88afddf210c26b2c0c-merged.mount: Deactivated successfully.
Oct 01 13:09:58 compute-0 podman[89259]: 2025-10-01 13:09:58.911613175 +0000 UTC m=+1.405205838 container remove 016c6f455591986dcec188a97f971ee3d40e3c0a7f475adfc9ace1cdcb69920c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:09:59 compute-0 podman[89464]: 2025-10-01 13:09:59.202520645 +0000 UTC m=+0.070335712 container create c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:09:59 compute-0 podman[89464]: 2025-10-01 13:09:59.158279226 +0000 UTC m=+0.026094353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0dd9c558c70587fac387e9e5c0564bf916306cdecdd498dc936e067251c290/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:09:59 compute-0 podman[89464]: 2025-10-01 13:09:59.304477482 +0000 UTC m=+0.172292549 container init c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:09:59 compute-0 podman[89464]: 2025-10-01 13:09:59.314496362 +0000 UTC m=+0.182311399 container start c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:09:59 compute-0 bash[89464]: c7bfaf4b1718864b8faf9e181463d9e2a2396c41a70f89cff77bf291b929f198
Oct 01 13:09:59 compute-0 systemd[1]: Started Ceph osd.1 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:09:59 compute-0 ceph-osd[89484]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:09:59 compute-0 ceph-osd[89484]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 13:09:59 compute-0 ceph-osd[89484]: pidfile_write: ignore empty --pid-file
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcbdb800 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 13:09:59 compute-0 sudo[88543]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:09:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:09:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct 01 13:09:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:09:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct 01 13:09:59 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct 01 13:09:59 compute-0 sudo[89497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:59 compute-0 sudo[89497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:59 compute-0 sudo[89497]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:59 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct 01 13:09:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:09:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:09:59 compute-0 ceph-mon[74802]: purged_snaps scrub starts
Oct 01 13:09:59 compute-0 ceph-mon[74802]: purged_snaps scrub ok
Oct 01 13:09:59 compute-0 ceph-mon[74802]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:09:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:09:59 compute-0 sudo[89522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:09:59 compute-0 sudo[89522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:59 compute-0 sudo[89522]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:59 compute-0 sudo[89547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:09:59 compute-0 sudo[89547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:59 compute-0 sudo[89547]: pam_unix(sudo:session): session closed for user root
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dbd99800 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 13:09:59 compute-0 sudo[89572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:09:59 compute-0 sudo[89572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:09:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:09:59 compute-0 ceph-osd[89484]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 01 13:09:59 compute-0 ceph-osd[89484]: load: jerasure load: lrc 
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:09:59 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.033441783 +0000 UTC m=+0.054615380 container create 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:10:00 compute-0 systemd[1]: Started libpod-conmon-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope.
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.001361555 +0000 UTC m=+0.022535172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.114803953 +0000 UTC m=+0.135977580 container init 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.121023547 +0000 UTC m=+0.142197144 container start 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:10:00 compute-0 pensive_jones[89661]: 167 167
Oct 01 13:10:00 compute-0 systemd[1]: libpod-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope: Deactivated successfully.
Oct 01 13:10:00 compute-0 conmon[89661]: conmon 33d62683fe20605dcd7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope/container/memory.events
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.132692905 +0000 UTC m=+0.153866502 container attach 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.133528808 +0000 UTC m=+0.154702405 container died 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 13:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-85ebfcc8b4935e0c911a2f46f05c4d599cde7dec91f38e9b59b8692e06dbb6aa-merged.mount: Deactivated successfully.
Oct 01 13:10:00 compute-0 podman[89645]: 2025-10-01 13:10:00.243954331 +0000 UTC m=+0.265127948 container remove 33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:00 compute-0 systemd[1]: libpod-conmon-33d62683fe20605dcd7cfc2106b0367ca85e53e94060505b6123bd5e1ca7273c.scope: Deactivated successfully.
Oct 01 13:10:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:00 compute-0 ceph-osd[89484]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs mount
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs mount shared_bdev_used = 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Git sha 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DB SUMMARY
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DB Session ID:  4TQUBN3XRRRFZHEOXA8H
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                     Options.env: 0x55f3dcc2dce0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                Options.info_log: 0x55f3dbe208a0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.write_buffer_manager: 0x55f3dcd36460
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.row_cache: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.wal_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.wal_compression: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Compression algorithms supported:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZSTD supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe202c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:00 compute-0 podman[89698]: 2025-10-01 13:10:00.477574107 +0000 UTC m=+0.046274918 container create 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200481995, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200482181, "job": 1, "event": "recovery_finished"}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: freelist init
Oct 01 13:10:00 compute-0 ceph-osd[89484]: freelist _read_cfg
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs umount
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) close
Oct 01 13:10:00 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct 01 13:10:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:10:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:00 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:10:00 compute-0 systemd[1]: Started libpod-conmon-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope.
Oct 01 13:10:00 compute-0 ceph-mon[74802]: Deploying daemon osd.2 on compute-0
Oct 01 13:10:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:00 compute-0 podman[89698]: 2025-10-01 13:10:00.454347336 +0000 UTC m=+0.023048177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:00 compute-0 podman[89698]: 2025-10-01 13:10:00.588623787 +0000 UTC m=+0.157324659 container init 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:10:00 compute-0 podman[89698]: 2025-10-01 13:10:00.60191709 +0000 UTC m=+0.170617911 container start 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:00 compute-0 podman[89698]: 2025-10-01 13:10:00.621723255 +0000 UTC m=+0.190424086 container attach 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bdev(0x55f3dcc5d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs mount
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluefs mount shared_bdev_used = 4718592
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Git sha 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DB SUMMARY
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DB Session ID:  4TQUBN3XRRRFZHEOXA8G
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                     Options.env: 0x55f3dcdde460
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                Options.info_log: 0x55f3dbe20600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.write_buffer_manager: 0x55f3dcd366e0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.row_cache: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                              Options.wal_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.wal_compression: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Compression algorithms supported:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZSTD supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dbe20a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f3dc0e6d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f3dbe0d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200781843, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200787867, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200795082, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200801634, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324200, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5b5e924d-74e5-4a0d-a2ac-d31876a6fa2b", "db_session_id": "4TQUBN3XRRRFZHEOXA8G", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324200803284, "job": 1, "event": "recovery_finished"}
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f3dcdebc00
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: DB pointer 0x55f3dcd1fa00
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 01 13:10:00 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:10:00 compute-0 ceph-osd[89484]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 13:10:00 compute-0 ceph-osd[89484]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 13:10:00 compute-0 ceph-osd[89484]: _get_class not permitted to load lua
Oct 01 13:10:00 compute-0 ceph-osd[89484]: _get_class not permitted to load sdk
Oct 01 13:10:00 compute-0 ceph-osd[89484]: _get_class not permitted to load test_remote_reads
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 load_pgs
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 load_pgs opened 0 pgs
Oct 01 13:10:00 compute-0 ceph-osd[89484]: osd.1 0 log_to_monitors true
Oct 01 13:10:00 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:00.861+0000 7f5cc0ed4740 -1 osd.1 0 log_to_monitors true
Oct 01 13:10:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct 01 13:10:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.477 iops: 8314.022 elapsed_sec: 0.361
Oct 01 13:10:01 compute-0 ceph-osd[88455]: log_channel(cluster) log [WRN] : OSD bench result of 8314.022192 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 0 waiting for initial osdmap
Oct 01 13:10:01 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:10:01.039+0000 7fbacd23d640 -1 osd.0 0 waiting for initial osdmap
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 01 13:10:01 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-0[88451]: 2025-10-01T13:10:01.069+0000 7fbac8865640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 01 13:10:01 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 01 13:10:01 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]:                             [--no-systemd] [--no-tmpfs]
Oct 01 13:10:01 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test[89909]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 01 13:10:01 compute-0 systemd[1]: libpod-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope: Deactivated successfully.
Oct 01 13:10:01 compute-0 podman[89698]: 2025-10-01 13:10:01.262773074 +0000 UTC m=+0.831473885 container died 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3424bb5d0e0393fedd91de57cbc9ae2685935944aba8845ca316350c39afffb0-merged.mount: Deactivated successfully.
Oct 01 13:10:01 compute-0 podman[89698]: 2025-10-01 13:10:01.3257896 +0000 UTC m=+0.894490411 container remove 68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:10:01 compute-0 systemd[1]: libpod-conmon-68c6395253aabec6af769216dd27afe003ca37b578d69d5c03c33965e7b37118.scope: Deactivated successfully.
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1067828362; not ready for session (expect reconnect)
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct 01 13:10:01 compute-0 systemd[1]: Reloading.
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:10:01 compute-0 ceph-mon[74802]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:10:01 compute-0 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362] boot
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:01 compute-0 systemd-rc-local-generator[90189]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:01 compute-0 ceph-osd[88455]: osd.0 9 state: booting -> active
Oct 01 13:10:01 compute-0 systemd-sysv-generator[90193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:10:01 compute-0 ceph-mgr[75103]: [devicehealth INFO root] creating mgr pool
Oct 01 13:10:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct 01 13:10:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 01 13:10:01 compute-0 systemd[1]: Reloading.
Oct 01 13:10:01 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 13:10:01 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 13:10:01 compute-0 systemd-sysv-generator[90237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:01 compute-0 systemd-rc-local-generator[90234]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:02 compute-0 systemd[1]: Starting Ceph osd.2 for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:10:02 compute-0 podman[90289]: 2025-10-01 13:10:02.397200316 +0000 UTC m=+0.081653189 container create bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:02 compute-0 podman[90289]: 2025-10-01 13:10:02.336182516 +0000 UTC m=+0.020635429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct 01 13:10:02 compute-0 podman[90289]: 2025-10-01 13:10:02.615182893 +0000 UTC m=+0.299635816 container init bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:10:02 compute-0 podman[90289]: 2025-10-01 13:10:02.625680416 +0000 UTC m=+0.310133289 container start bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 done with init, starting boot process
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 start_boot
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 13:10:02 compute-0 ceph-osd[89484]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct 01 13:10:02 compute-0 podman[90289]: 2025-10-01 13:10:02.709333961 +0000 UTC m=+0.393786834 container attach bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:02 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:02 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct 01 13:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:02 compute-0 ceph-mon[74802]: OSD bench result of 8314.022192 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 01 13:10:02 compute-0 ceph-mon[74802]: osd.0 [v2:192.168.122.100:6802/1067828362,v1:192.168.122.100:6803/1067828362] boot
Oct 01 13:10:02 compute-0 ceph-mon[74802]: osdmap e9: 3 total, 1 up, 3 in
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:02 compute-0 ceph-mon[74802]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 01 13:10:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 01 13:10:02 compute-0 ceph-osd[88455]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 13:10:02 compute-0 ceph-osd[88455]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 01 13:10:02 compute-0 ceph-osd[88455]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:10:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 13:10:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 01 13:10:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct 01 13:10:03 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct 01 13:10:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:03 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:10:03 compute-0 bash[90289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 01 13:10:03 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate[90304]: --> ceph-volume raw activate successful for osd ID: 2
Oct 01 13:10:03 compute-0 bash[90289]: --> ceph-volume raw activate successful for osd ID: 2
Oct 01 13:10:03 compute-0 systemd[1]: libpod-bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172.scope: Deactivated successfully.
Oct 01 13:10:03 compute-0 podman[90289]: 2025-10-01 13:10:03.781077876 +0000 UTC m=+1.465530749 container died bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:03 compute-0 systemd[1]: libpod-bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172.scope: Consumed 1.170s CPU time.
Oct 01 13:10:03 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct 01 13:10:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0129ae11dfcb3d9369d83e539fc90af356aa3d1fbc5fb640e17b882d1ecc2589-merged.mount: Deactivated successfully.
Oct 01 13:10:03 compute-0 podman[90289]: 2025-10-01 13:10:03.923920318 +0000 UTC m=+1.608373191 container remove bff328ae7f358ca132a5294b8147314c7390c0af96045efe80b3baec4a384172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:10:03 compute-0 ceph-mon[74802]: purged_snaps scrub starts
Oct 01 13:10:03 compute-0 ceph-mon[74802]: purged_snaps scrub ok
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 01 13:10:03 compute-0 ceph-mon[74802]: osdmap e10: 3 total, 1 up, 3 in
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 01 13:10:03 compute-0 ceph-mon[74802]: osdmap e11: 3 total, 1 up, 3 in
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:04 compute-0 podman[90481]: 2025-10-01 13:10:04.14390752 +0000 UTC m=+0.065915657 container create 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:04 compute-0 podman[90481]: 2025-10-01 13:10:04.113612732 +0000 UTC m=+0.035620849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec0bbd1b7b12faba5ddd31b24d1f30dfa9510d5aefd8e553e29167148083fb5e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 podman[90481]: 2025-10-01 13:10:04.272140803 +0000 UTC m=+0.194148940 container init 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:04 compute-0 podman[90481]: 2025-10-01 13:10:04.279180771 +0000 UTC m=+0.201188908 container start 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:10:04 compute-0 bash[90481]: 1866f3a29a4e666710af5960850cff6901ef9d11df821255375d1e2347c28bac
Oct 01 13:10:04 compute-0 systemd[1]: Started Ceph osd.2 for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:10:04 compute-0 ceph-osd[90500]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:10:04 compute-0 ceph-osd[90500]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 01 13:10:04 compute-0 ceph-osd[90500]: pidfile_write: ignore empty --pid-file
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:04 compute-0 sudo[89572]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae94b800 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 13:10:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:04 compute-0 sudo[90513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:04 compute-0 sudo[90513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:04 compute-0 sudo[90513]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:04 compute-0 sudo[90560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlcuaxtfvgzuesfjavyssnkdoemznpu ; /usr/bin/python3'
Oct 01 13:10:04 compute-0 sudo[90560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:04 compute-0 sudo[90562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:04 compute-0 sudo[90562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:04 compute-0 sudo[90562]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1adb13800 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 13:10:04 compute-0 sudo[90589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:04 compute-0 sudo[90589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:04 compute-0 sudo[90589]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:04 compute-0 python3[90571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:04 compute-0 sudo[90616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:10:04 compute-0 sudo[90616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:04 compute-0 podman[90639]: 2025-10-01 13:10:04.799020494 +0000 UTC m=+0.079300202 container create 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:10:04 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct 01 13:10:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:04 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:04 compute-0 podman[90639]: 2025-10-01 13:10:04.748076696 +0000 UTC m=+0.028356424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:04 compute-0 systemd[1]: Started libpod-conmon-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope.
Oct 01 13:10:04 compute-0 ceph-osd[90500]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 01 13:10:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:04 compute-0 ceph-osd[90500]: load: jerasure load: lrc 
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:04 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:04 compute-0 podman[90639]: 2025-10-01 13:10:04.912450312 +0000 UTC m=+0.192730050 container init 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:10:04 compute-0 podman[90639]: 2025-10-01 13:10:04.921564057 +0000 UTC m=+0.201843765 container start 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:10:04 compute-0 podman[90639]: 2025-10-01 13:10:04.933552673 +0000 UTC m=+0.213832371 container attach 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.139122552 +0000 UTC m=+0.063231263 container create 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 13:10:05 compute-0 systemd[1]: Started libpod-conmon-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope.
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.102116365 +0000 UTC m=+0.026225066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.237152249 +0000 UTC m=+0.161261000 container init 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.246252633 +0000 UTC m=+0.170361304 container start 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:10:05 compute-0 laughing_noyce[90727]: 167 167
Oct 01 13:10:05 compute-0 systemd[1]: libpod-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope: Deactivated successfully.
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.259060182 +0000 UTC m=+0.183168863 container attach 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.259415902 +0000 UTC m=+0.183524573 container died 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3186858516f42c3227d00219d0b46e80f3efccf1556d8b8b8711b8a8d942ece-merged.mount: Deactivated successfully.
Oct 01 13:10:05 compute-0 podman[90707]: 2025-10-01 13:10:05.324721931 +0000 UTC m=+0.248830612 container remove 73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:05 compute-0 systemd[1]: libpod-conmon-73f3f7bd202adf62de61d0023c363f6e19c8fe954fbd9428061ed25446ebc9c8.scope: Deactivated successfully.
Oct 01 13:10:05 compute-0 ceph-mon[74802]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 01 13:10:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:05 compute-0 ceph-osd[90500]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9ccc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs mount
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs mount shared_bdev_used = 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Git sha 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DB SUMMARY
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DB Session ID:  7GQH8GJG7ZFW7CY52MVW
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                     Options.env: 0x55b1ae99dc70
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                Options.info_log: 0x55b1adb9a8a0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.write_buffer_manager: 0x55b1aeab0460
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.row_cache: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.wal_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.wal_compression: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Compression algorithms supported:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZSTD supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2133678-e23b-4ce6-a6b1-f49e8e1c0754
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205479113, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205479390, "job": 1, "event": "recovery_finished"}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: freelist init
Oct 01 13:10:05 compute-0 ceph-osd[90500]: freelist _read_cfg
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs umount
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) close
Oct 01 13:10:05 compute-0 podman[90780]: 2025-10-01 13:10:05.515702922 +0000 UTC m=+0.044293612 container create 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:10:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 13:10:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715268222' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:10:05 compute-0 naughty_curie[90661]: 
Oct 01 13:10:05 compute-0 naughty_curie[90661]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1759324201,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{}}
Oct 01 13:10:05 compute-0 podman[90639]: 2025-10-01 13:10:05.561531436 +0000 UTC m=+0.841811124 container died 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:10:05 compute-0 systemd[1]: Started libpod-conmon-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope.
Oct 01 13:10:05 compute-0 systemd[1]: libpod-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope: Deactivated successfully.
Oct 01 13:10:05 compute-0 podman[90780]: 2025-10-01 13:10:05.501177095 +0000 UTC m=+0.029767765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7aa098f4096610e5bf1d23f300d9793601dc98130d353e004a320b4d2b16be1-merged.mount: Deactivated successfully.
Oct 01 13:10:05 compute-0 podman[90780]: 2025-10-01 13:10:05.642544206 +0000 UTC m=+0.171134876 container init 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:05 compute-0 podman[90780]: 2025-10-01 13:10:05.651400384 +0000 UTC m=+0.179991054 container start 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:05 compute-0 podman[90780]: 2025-10-01 13:10:05.666794445 +0000 UTC m=+0.195385165 container attach 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:10:05 compute-0 podman[90639]: 2025-10-01 13:10:05.674331937 +0000 UTC m=+0.954611645 container remove 396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b (image=quay.io/ceph/ceph:v18, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:05 compute-0 systemd[1]: libpod-conmon-396acdcc81a9825419b107255b1ea7f13aab4fad1113dfb9c368d217ba0b8c9b.scope: Deactivated successfully.
Oct 01 13:10:05 compute-0 sudo[90560]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bdev(0x55b1ae9cd400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs mount
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 01 13:10:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluefs mount shared_bdev_used = 4718592
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: RocksDB version: 7.9.2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Git sha 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DB SUMMARY
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DB Session ID:  7GQH8GJG7ZFW7CY52MVX
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: CURRENT file:  CURRENT
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: IDENTITY file:  IDENTITY
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.error_if_exists: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.create_if_missing: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.paranoid_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                     Options.env: 0x55b1aeb58460
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                Options.info_log: 0x55b1adb9a600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_file_opening_threads: 16
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.statistics: (nil)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.use_fsync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.max_log_file_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.allow_fallocate: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.use_direct_reads: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.create_missing_column_families: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.db_log_dir: 
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                                 Options.wal_dir: db.wal
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.advise_random_on_open: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.write_buffer_manager: 0x55b1aeab0460
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                            Options.rate_limiter: (nil)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.unordered_write: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.row_cache: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                              Options.wal_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.allow_ingest_behind: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.two_write_queues: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.manual_wal_flush: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.wal_compression: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.atomic_flush: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.log_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.allow_data_in_errors: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.db_host_id: __hostname__
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_background_jobs: 4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_background_compactions: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_subcompactions: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.max_open_files: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.max_background_flushes: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Compression algorithms supported:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZSTD supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kXpressCompression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kBZip2Compression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kLZ4Compression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kZlibCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kLZ4HCCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         kSnappyCompression supported: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb871f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:           Options.merge_operator: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.compaction_filter_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.sst_partitioner_factory: None
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b1adb9a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b1adb87090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.write_buffer_size: 16777216
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.max_write_buffer_number: 64
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.compression: LZ4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.num_levels: 7
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.level: 32767
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.compression_opts.strategy: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                  Options.compression_opts.enabled: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.arena_block_size: 1048576
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.disable_auto_compactions: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.inplace_update_support: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.bloom_locality: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                    Options.max_successive_merges: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.paranoid_file_checks: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.force_consistency_checks: 1
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.report_bg_io_stats: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                               Options.ttl: 2592000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                       Options.enable_blob_files: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                           Options.min_blob_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                          Options.blob_file_size: 268435456
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb:                Options.blob_file_starting_level: 0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2133678-e23b-4ce6-a6b1-f49e8e1c0754
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205753956, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205759316, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205761564, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205763648, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324205, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2133678-e23b-4ce6-a6b1-f49e8e1c0754", "db_session_id": "7GQH8GJG7ZFW7CY52MVX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324205768540, "job": 1, "event": "recovery_finished"}
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b1adcf4000
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: DB pointer 0x55b1aea8fa00
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 01 13:10:05 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:10:05 compute-0 ceph-osd[90500]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 01 13:10:05 compute-0 ceph-osd[90500]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 01 13:10:05 compute-0 ceph-osd[90500]: _get_class not permitted to load lua
Oct 01 13:10:05 compute-0 ceph-osd[90500]: _get_class not permitted to load sdk
Oct 01 13:10:05 compute-0 ceph-osd[90500]: _get_class not permitted to load test_remote_reads
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 load_pgs
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 load_pgs opened 0 pgs
Oct 01 13:10:05 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:05.808+0000 7f715a025740 -1 osd.2 0 log_to_monitors true
Oct 01 13:10:05 compute-0 ceph-osd[90500]: osd.2 0 log_to_monitors true
Oct 01 13:10:05 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4245549462; not ready for session (expect reconnect)
Oct 01 13:10:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:05 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct 01 13:10:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct 01 13:10:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 01 13:10:05 compute-0 ceph-osd[89484]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.124 iops: 8479.669 elapsed_sec: 0.354
Oct 01 13:10:05 compute-0 ceph-osd[89484]: log_channel(cluster) log [WRN] : OSD bench result of 8479.668600 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:05 compute-0 ceph-osd[89484]: osd.1 0 waiting for initial osdmap
Oct 01 13:10:05 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:05.997+0000 7f5cbd66b640 -1 osd.1 0 waiting for initial osdmap
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct 01 13:10:06 compute-0 sudo[91236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gostcreubspjdqukoswljjxlkybqraqj ; /usr/bin/python3'
Oct 01 13:10:06 compute-0 sudo[91236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:06 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-1[89480]: 2025-10-01T13:10:06.018+0000 7f5cb847c640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 set_numa_affinity not setting numa affinity
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct 01 13:10:06 compute-0 python3[91239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:06 compute-0 podman[91240]: 2025-10-01 13:10:06.20705709 +0000 UTC m=+0.049868377 container create 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:10:06 compute-0 systemd[1]: Started libpod-conmon-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope.
Oct 01 13:10:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:06 compute-0 podman[91240]: 2025-10-01 13:10:06.183600814 +0000 UTC m=+0.026412131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:06 compute-0 podman[91240]: 2025-10-01 13:10:06.291110415 +0000 UTC m=+0.133921702 container init 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:10:06 compute-0 podman[91240]: 2025-10-01 13:10:06.302136094 +0000 UTC m=+0.144947361 container start 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:06 compute-0 podman[91240]: 2025-10-01 13:10:06.305835488 +0000 UTC m=+0.148646765 container attach 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct 01 13:10:06 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2715268222' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462] boot
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:06 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 12 state: booting -> active
Oct 01 13:10:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:06 compute-0 practical_edison[90982]: {
Oct 01 13:10:06 compute-0 practical_edison[90982]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_id": 0,
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "type": "bluestore"
Oct 01 13:10:06 compute-0 practical_edison[90982]:     },
Oct 01 13:10:06 compute-0 practical_edison[90982]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_id": 2,
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "type": "bluestore"
Oct 01 13:10:06 compute-0 practical_edison[90982]:     },
Oct 01 13:10:06 compute-0 practical_edison[90982]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_id": 1,
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:06 compute-0 practical_edison[90982]:         "type": "bluestore"
Oct 01 13:10:06 compute-0 practical_edison[90982]:     }
Oct 01 13:10:06 compute-0 practical_edison[90982]: }
Oct 01 13:10:06 compute-0 systemd[1]: libpod-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Deactivated successfully.
Oct 01 13:10:06 compute-0 podman[90780]: 2025-10-01 13:10:06.678821448 +0000 UTC m=+1.207412118 container died 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:10:06 compute-0 systemd[1]: libpod-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Consumed 1.026s CPU time.
Oct 01 13:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a23ea140ff85bd61a136608858fab236392475f0b38325e883e423e265081fe9-merged.mount: Deactivated successfully.
Oct 01 13:10:06 compute-0 podman[90780]: 2025-10-01 13:10:06.738033037 +0000 UTC m=+1.266623687 container remove 8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:10:06 compute-0 sudo[90616]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:06 compute-0 systemd[1]: libpod-conmon-8c6b9364d5448768401dfb5f199634d0b0c114776ca0e57dc74e13415b5904ba.scope: Deactivated successfully.
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 01 13:10:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 01 13:10:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:06 compute-0 sudo[91319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:06 compute-0 sudo[91319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:06 compute-0 sudo[91319]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:06 compute-0 sudo[91347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:10:06 compute-0 sudo[91347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:06 compute-0 sudo[91347]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:06 compute-0 sudo[91372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:06 compute-0 sudo[91372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:06 compute-0 sudo[91372]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:07 compute-0 sudo[91397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:07 compute-0 sudo[91397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:07 compute-0 sudo[91397]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:07 compute-0 sudo[91422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:07 compute-0 sudo[91422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:07 compute-0 sudo[91422]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:07 compute-0 sudo[91447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:10:07 compute-0 sudo[91447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 done with init, starting boot process
Oct 01 13:10:07 compute-0 pedantic_edison[91255]: pool 'vms' created
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 start_boot
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 01 13:10:07 compute-0 ceph-osd[90500]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:07 compute-0 ceph-mon[74802]: pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 01 13:10:07 compute-0 ceph-mon[74802]: OSD bench result of 8479.668600 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 01 13:10:07 compute-0 ceph-mon[74802]: osd.1 [v2:192.168.122.100:6806/4245549462,v1:192.168.122.100:6807/4245549462] boot
Oct 01 13:10:07 compute-0 ceph-mon[74802]: osdmap e12: 3 total, 2 up, 3 in
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:07 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct 01 13:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:07 compute-0 systemd[1]: libpod-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope: Deactivated successfully.
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:07 compute-0 podman[91240]: 2025-10-01 13:10:07.408421127 +0000 UTC m=+1.251232404 container died 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4705cdcec5a9f1a9b3949a67ae92f5a2472982adf67814bbf6539386925d217c-merged.mount: Deactivated successfully.
Oct 01 13:10:07 compute-0 podman[91240]: 2025-10-01 13:10:07.518825311 +0000 UTC m=+1.361636578 container remove 56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223 (image=quay.io/ceph/ceph:v18, name=pedantic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] creating main.db for devicehealth
Oct 01 13:10:07 compute-0 systemd[1]: libpod-conmon-56e9adabe4135b054da5feb2e70b1e0aa19102fe6e230bc4f58b2b59f3716223.scope: Deactivated successfully.
Oct 01 13:10:07 compute-0 sudo[91236]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 01 13:10:07 compute-0 sudo[91577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tklkuciczvlxndjkrliqormifjicbcve ; /usr/bin/python3'
Oct 01 13:10:07 compute-0 sudo[91577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:07 compute-0 sudo[91578]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Oct 01 13:10:07 compute-0 sudo[91578]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 13:10:07 compute-0 sudo[91578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Oct 01 13:10:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:07 compute-0 sudo[91578]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 01 13:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct 01 13:10:07 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:10:07 compute-0 podman[91598]: 2025-10-01 13:10:07.826646285 +0000 UTC m=+0.058949103 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:07 compute-0 python3[91581]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:07 compute-0 podman[91598]: 2025-10-01 13:10:07.93605822 +0000 UTC m=+0.168361018 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:07 compute-0 podman[91618]: 2025-10-01 13:10:07.995250137 +0000 UTC m=+0.083652914 container create a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:10:08 compute-0 systemd[1]: Started libpod-conmon-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope.
Oct 01 13:10:08 compute-0 podman[91618]: 2025-10-01 13:10:07.95644893 +0000 UTC m=+0.044851707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:08 compute-0 podman[91618]: 2025-10-01 13:10:08.085133225 +0000 UTC m=+0.173536012 container init a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:10:08 compute-0 podman[91618]: 2025-10-01 13:10:08.093455119 +0000 UTC m=+0.181857896 container start a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:08 compute-0 podman[91618]: 2025-10-01 13:10:08.10027215 +0000 UTC m=+0.188674927 container attach a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:10:08 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:08 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.puxjpb(active, since 80s)
Oct 01 13:10:08 compute-0 sudo[91447]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2357689514' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:08 compute-0 ceph-mon[74802]: osdmap e13: 3 total, 2 up, 3 in
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 01 13:10:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:08 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:08 compute-0 sudo[91754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:08 compute-0 sudo[91754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:08 compute-0 sudo[91754]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:08 compute-0 sudo[91779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:08 compute-0 sudo[91779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:08 compute-0 sudo[91779]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:08 compute-0 sudo[91807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:08 compute-0 sudo[91807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:08 compute-0 sudo[91807]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:08 compute-0 sudo[91832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- inventory --format=json-pretty --filter-for-batch
Oct 01 13:10:08 compute-0 sudo[91832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.100670467 +0000 UTC m=+0.055568999 container create 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:10:09 compute-0 systemd[1]: Started libpod-conmon-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope.
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.069835073 +0000 UTC m=+0.024733705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.211042469 +0000 UTC m=+0.165941091 container init 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.218433236 +0000 UTC m=+0.173331768 container start 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:10:09 compute-0 pedantic_feynman[91912]: 167 167
Oct 01 13:10:09 compute-0 systemd[1]: libpod-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope: Deactivated successfully.
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.236124411 +0000 UTC m=+0.191022973 container attach 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.237071998 +0000 UTC m=+0.191970540 container died 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5e7d07730e6bf319a0cf69b32c0a26116b06e6edea2a3e4f6427fb3d1413814-merged.mount: Deactivated successfully.
Oct 01 13:10:09 compute-0 podman[91896]: 2025-10-01 13:10:09.340411883 +0000 UTC m=+0.295310425 container remove 028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:09 compute-0 systemd[1]: libpod-conmon-028f6d88b2503981de972514d00cc307e124c8fca16f33f8cd4a06132336a83e.scope: Deactivated successfully.
Oct 01 13:10:09 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct 01 13:10:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:09 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:09 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:09 compute-0 ceph-mon[74802]: purged_snaps scrub starts
Oct 01 13:10:09 compute-0 ceph-mon[74802]: purged_snaps scrub ok
Oct 01 13:10:09 compute-0 ceph-mon[74802]: pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:09 compute-0 ceph-mon[74802]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:09 compute-0 ceph-mon[74802]: mgrmap e9: compute-0.puxjpb(active, since 80s)
Oct 01 13:10:09 compute-0 ceph-mon[74802]: osdmap e14: 3 total, 2 up, 3 in
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:09 compute-0 podman[91938]: 2025-10-01 13:10:09.520348034 +0000 UTC m=+0.065110425 container create 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:10:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct 01 13:10:09 compute-0 podman[91938]: 2025-10-01 13:10:09.481971099 +0000 UTC m=+0.026733540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:09 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct 01 13:10:09 compute-0 blissful_engelbart[91661]: pool 'volumes' created
Oct 01 13:10:09 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct 01 13:10:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:09 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:09 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:09 compute-0 systemd[1]: Started libpod-conmon-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope.
Oct 01 13:10:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:09 compute-0 systemd[1]: libpod-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope: Deactivated successfully.
Oct 01 13:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:09 compute-0 podman[91938]: 2025-10-01 13:10:09.653752611 +0000 UTC m=+0.198514962 container init 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:10:09 compute-0 podman[91938]: 2025-10-01 13:10:09.663523876 +0000 UTC m=+0.208286247 container start 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:09 compute-0 podman[91938]: 2025-10-01 13:10:09.667662591 +0000 UTC m=+0.212424932 container attach 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:09 compute-0 podman[91958]: 2025-10-01 13:10:09.671753746 +0000 UTC m=+0.035582158 container died a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-483cd5114d3229f0d5103a7b8ed10145d96047c9fe97b447629a05b6dc545aff-merged.mount: Deactivated successfully.
Oct 01 13:10:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v47: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:09 compute-0 podman[91958]: 2025-10-01 13:10:09.759923016 +0000 UTC m=+0.123751428 container remove a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721 (image=quay.io/ceph/ceph:v18, name=blissful_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:09 compute-0 systemd[1]: libpod-conmon-a62c8c9ed7cc38037709384ecb529282d77f89be524bc183210c9516e430e721.scope: Deactivated successfully.
Oct 01 13:10:09 compute-0 sudo[91577]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:09 compute-0 sudo[91998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flojzqkwtynbclbefebmqnzuzlvsjxim ; /usr/bin/python3'
Oct 01 13:10:09 compute-0 sudo[91998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:10 compute-0 python3[92000]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:10 compute-0 podman[92001]: 2025-10-01 13:10:10.187431642 +0000 UTC m=+0.051481382 container create 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:10:10 compute-0 systemd[1]: Started libpod-conmon-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope.
Oct 01 13:10:10 compute-0 podman[92001]: 2025-10-01 13:10:10.156325882 +0000 UTC m=+0.020375622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:10 compute-0 podman[92001]: 2025-10-01 13:10:10.279465632 +0000 UTC m=+0.143515362 container init 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:10 compute-0 podman[92001]: 2025-10-01 13:10:10.289997936 +0000 UTC m=+0.154047686 container start 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:10 compute-0 podman[92001]: 2025-10-01 13:10:10.299814361 +0000 UTC m=+0.163864101 container attach 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:10:10 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:10 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct 01 13:10:10 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:10 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:10 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1742323114' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:10 compute-0 ceph-mon[74802]: osdmap e15: 3 total, 2 up, 3 in
Oct 01 13:10:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.384 iops: 8034.385 elapsed_sec: 0.373
Oct 01 13:10:10 compute-0 ceph-osd[90500]: log_channel(cluster) log [WRN] : OSD bench result of 8034.385004 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 0 waiting for initial osdmap
Oct 01 13:10:10 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:10.868+0000 7f7155fa5640 -1 osd.2 0 waiting for initial osdmap
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 set_numa_affinity not setting numa affinity
Oct 01 13:10:10 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-osd-2[90496]: 2025-10-01T13:10:10.895+0000 7f71515cd640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 01 13:10:10 compute-0 ceph-osd[90500]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]: [
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:     {
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "available": false,
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "ceph_device": false,
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "lsm_data": {},
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "lvs": [],
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "path": "/dev/sr0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "rejected_reasons": [
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "Has a FileSystem",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "Insufficient space (<5GB)"
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         ],
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         "sys_api": {
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "actuators": null,
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "device_nodes": "sr0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "devname": "sr0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "human_readable_size": "482.00 KB",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "id_bus": "ata",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "model": "QEMU DVD-ROM",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "nr_requests": "2",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "parent": "/dev/sr0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "partitions": {},
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "path": "/dev/sr0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "removable": "1",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "rev": "2.5+",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "ro": "0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "rotational": "0",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "sas_address": "",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "sas_device_handle": "",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "scheduler_mode": "mq-deadline",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "sectors": 0,
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "sectorsize": "2048",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "size": 493568.0,
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "support_discard": "2048",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "type": "disk",
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:             "vendor": "QEMU"
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:         }
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]:     }
Oct 01 13:10:11 compute-0 exciting_matsumoto[91955]: ]
Oct 01 13:10:11 compute-0 systemd[1]: libpod-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Deactivated successfully.
Oct 01 13:10:11 compute-0 systemd[1]: libpod-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Consumed 1.347s CPU time.
Oct 01 13:10:11 compute-0 podman[91938]: 2025-10-01 13:10:11.030317286 +0000 UTC m=+1.575079637 container died 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4caea5725ff33c612d22ee8df310bfc61017e6193c8634f98a8bb336f5f55529-merged.mount: Deactivated successfully.
Oct 01 13:10:11 compute-0 podman[91938]: 2025-10-01 13:10:11.094071973 +0000 UTC m=+1.638834324 container remove 72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:10:11 compute-0 systemd[1]: libpod-conmon-72c7b2afc413a70675ddf22df630138cb360d701b9a1e6d5f48d76ccb19b455c.scope: Deactivated successfully.
Oct 01 13:10:11 compute-0 sudo[91832]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43640k
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43640k
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b1923273-3f9b-4fc1-89ce-482262f6440e does not exist
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 89b5ae69-71f3-4f81-ab44-72cf218a25cb does not exist
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f9371bef-2542-4079-9f16-3b23f3469d42 does not exist
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:11 compute-0 sudo[93682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:11 compute-0 sudo[93682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:11 compute-0 sudo[93682]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:11 compute-0 sudo[93707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:11 compute-0 sudo[93707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:11 compute-0 sudo[93707]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2247178069; not ready for session (expect reconnect)
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct 01 13:10:11 compute-0 sudo[93732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:11 compute-0 sudo[93732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:11 compute-0 sudo[93732]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:11 compute-0 sudo[93757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:10:11 compute-0 sudo[93757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct 01 13:10:11 compute-0 ceph-mon[74802]: pgmap v47: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:11 compute-0 ceph-mon[74802]: osdmap e16: 3 total, 2 up, 3 in
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct 01 13:10:11 compute-0 serene_benz[92017]: pool 'backups' created
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069] boot
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct 01 13:10:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct 01 13:10:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:11 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:11 compute-0 ceph-osd[90500]: osd.2 17 state: booting -> active
Oct 01 13:10:11 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:11 compute-0 systemd[1]: libpod-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope: Deactivated successfully.
Oct 01 13:10:11 compute-0 podman[92001]: 2025-10-01 13:10:11.658753043 +0000 UTC m=+1.522802773 container died 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 01 13:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8580ba48e516ebb60537667de8afc821dfc7165a1fb407c15e38fda2ac9a6043-merged.mount: Deactivated successfully.
Oct 01 13:10:11 compute-0 podman[92001]: 2025-10-01 13:10:11.704835003 +0000 UTC m=+1.568884723 container remove 057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d (image=quay.io/ceph/ceph:v18, name=serene_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:10:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v50: 4 pgs: 3 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:11 compute-0 systemd[1]: libpod-conmon-057c3aeb4605bbdc2c6776416256b63336f661d0f8551c7e1b113fb041abc86d.scope: Deactivated successfully.
Oct 01 13:10:11 compute-0 sudo[91998]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.877793789 +0000 UTC m=+0.070102675 container create c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:11 compute-0 systemd[1]: Started libpod-conmon-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope.
Oct 01 13:10:11 compute-0 sudo[93869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feazdqydhfaiiywxkrnvzmrpvjfaexua ; /usr/bin/python3'
Oct 01 13:10:11 compute-0 sudo[93869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.845802073 +0000 UTC m=+0.038111009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.961885735 +0000 UTC m=+0.154194621 container init c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.974027415 +0000 UTC m=+0.166336301 container start c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.978023427 +0000 UTC m=+0.170332353 container attach c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:10:11 compute-0 musing_shaw[93873]: 167 167
Oct 01 13:10:11 compute-0 systemd[1]: libpod-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope: Deactivated successfully.
Oct 01 13:10:11 compute-0 podman[93832]: 2025-10-01 13:10:11.9824296 +0000 UTC m=+0.174738496 container died c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f948d4136eb64b500db2d7d33024bf16659353a6a70f835f5b9ad668e580b65-merged.mount: Deactivated successfully.
Oct 01 13:10:12 compute-0 podman[93832]: 2025-10-01 13:10:12.028394278 +0000 UTC m=+0.220703164 container remove c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:12 compute-0 systemd[1]: libpod-conmon-c9d8cde048c68be4c5ee7c77899d4eb09bc5c138ed7a36d4c7ba1354dfcaa58e.scope: Deactivated successfully.
Oct 01 13:10:12 compute-0 python3[93875]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:12 compute-0 podman[93898]: 2025-10-01 13:10:12.18013046 +0000 UTC m=+0.041072843 container create cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:12 compute-0 podman[93899]: 2025-10-01 13:10:12.18479691 +0000 UTC m=+0.044699793 container create 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:12 compute-0 systemd[1]: Started libpod-conmon-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope.
Oct 01 13:10:12 compute-0 systemd[1]: Started libpod-conmon-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope.
Oct 01 13:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:12 compute-0 podman[93898]: 2025-10-01 13:10:12.161941459 +0000 UTC m=+0.022883872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:12 compute-0 podman[93899]: 2025-10-01 13:10:12.161709843 +0000 UTC m=+0.021612746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:12 compute-0 podman[93899]: 2025-10-01 13:10:12.276051656 +0000 UTC m=+0.135954589 container init 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:10:12 compute-0 podman[93898]: 2025-10-01 13:10:12.280262425 +0000 UTC m=+0.141204818 container init cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:12 compute-0 podman[93899]: 2025-10-01 13:10:12.295845341 +0000 UTC m=+0.155748224 container start 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:10:12 compute-0 podman[93899]: 2025-10-01 13:10:12.300494172 +0000 UTC m=+0.160397075 container attach 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:10:12 compute-0 podman[93898]: 2025-10-01 13:10:12.303629839 +0000 UTC m=+0.164572262 container start cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:10:12 compute-0 podman[93898]: 2025-10-01 13:10:12.309159714 +0000 UTC m=+0.170102117 container attach cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:12 compute-0 ceph-mon[74802]: OSD bench result of 8034.385004 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 01 13:10:12 compute-0 ceph-mon[74802]: Adjusting osd_memory_target on compute-0 to 43640k
Oct 01 13:10:12 compute-0 ceph-mon[74802]: Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct 01 13:10:12 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2805017429' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:12 compute-0 ceph-mon[74802]: osd.2 [v2:192.168.122.100:6810/2247178069,v1:192.168.122.100:6811/2247178069] boot
Oct 01 13:10:12 compute-0 ceph-mon[74802]: osdmap e17: 3 total, 3 up, 3 in
Oct 01 13:10:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 01 13:10:12 compute-0 ceph-mon[74802]: pgmap v50: 4 pgs: 3 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 01 13:10:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct 01 13:10:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct 01 13:10:12 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct 01 13:10:12 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:12 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:13 compute-0 exciting_nash[93931]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:10:13 compute-0 exciting_nash[93931]: --> relative data size: 1.0
Oct 01 13:10:13 compute-0 exciting_nash[93931]: --> All data devices are unavailable
Oct 01 13:10:13 compute-0 systemd[1]: libpod-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope: Deactivated successfully.
Oct 01 13:10:13 compute-0 podman[93898]: 2025-10-01 13:10:13.363682997 +0000 UTC m=+1.224625380 container died cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bac3d3f71811e5368fe0f6b10c6cfa8b4261069e2f38147eba7733b7942d5f32-merged.mount: Deactivated successfully.
Oct 01 13:10:13 compute-0 podman[93898]: 2025-10-01 13:10:13.422592738 +0000 UTC m=+1.283535141 container remove cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:10:13 compute-0 systemd[1]: libpod-conmon-cf0f36ad9b0d130fa096691b826fc78ac549d5bed84e8d4ba8640a68f82df667.scope: Deactivated successfully.
Oct 01 13:10:13 compute-0 sudo[93757]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:13 compute-0 sudo[93998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:13 compute-0 sudo[93998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:13 compute-0 sudo[93998]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:13 compute-0 sudo[94023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:13 compute-0 sudo[94023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:13 compute-0 sudo[94023]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:13 compute-0 ceph-mon[74802]: osdmap e18: 3 total, 3 up, 3 in
Oct 01 13:10:13 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct 01 13:10:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct 01 13:10:13 compute-0 quirky_panini[93929]: pool 'images' created
Oct 01 13:10:13 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct 01 13:10:13 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:13 compute-0 systemd[1]: libpod-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope: Deactivated successfully.
Oct 01 13:10:13 compute-0 podman[93899]: 2025-10-01 13:10:13.674655269 +0000 UTC m=+1.534558192 container died 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:10:13 compute-0 sudo[94048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:13 compute-0 sudo[94048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:13 compute-0 sudo[94048]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a47ef7c19deb3dd36d2fbd387a35ce1794264830eeab44d03f790e5a13f301-merged.mount: Deactivated successfully.
Oct 01 13:10:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v53: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:13 compute-0 podman[93899]: 2025-10-01 13:10:13.726656976 +0000 UTC m=+1.586559859 container remove 6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45 (image=quay.io/ceph/ceph:v18, name=quirky_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:10:13 compute-0 systemd[1]: libpod-conmon-6fd61786b7ce54357614287aaf664252c57887f4afd2d5a93925c4daaf1ecf45.scope: Deactivated successfully.
Oct 01 13:10:13 compute-0 sudo[94085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:10:13 compute-0 sudo[94085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:13 compute-0 sudo[93869]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:13 compute-0 sudo[94141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhejzcmfjhcxgovsxqnttvmcamduwlck ; /usr/bin/python3'
Oct 01 13:10:13 compute-0 sudo[94141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:14 compute-0 python3[94147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.054881371 +0000 UTC m=+0.036513614 container create d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:10:14 compute-0 systemd[1]: Started libpod-conmon-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope.
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.097382762 +0000 UTC m=+0.043600203 container create 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:14 compute-0 systemd[1]: Started libpod-conmon-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope.
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.038646837 +0000 UTC m=+0.020279090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.136075406 +0000 UTC m=+0.117707659 container init d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.144039179 +0000 UTC m=+0.125671402 container start d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.148053982 +0000 UTC m=+0.129686215 container attach d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:14 compute-0 reverent_northcutt[94204]: 167 167
Oct 01 13:10:14 compute-0 systemd[1]: libpod-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope: Deactivated successfully.
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.151274392 +0000 UTC m=+0.097491853 container init 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.151836867 +0000 UTC m=+0.133469100 container died d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.156861969 +0000 UTC m=+0.103079400 container start 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.162669041 +0000 UTC m=+0.108886472 container attach 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.080315724 +0000 UTC m=+0.026533185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-947e2e6d9da180b650ab9b5acb7e4887d413bce61b5bacd978b1deccbbe894a4-merged.mount: Deactivated successfully.
Oct 01 13:10:14 compute-0 podman[94175]: 2025-10-01 13:10:14.191039525 +0000 UTC m=+0.172671758 container remove d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:10:14 compute-0 systemd[1]: libpod-conmon-d850af3dd8089d5fef474b9e28bbcdd5dc2d25ab5b3815f023cb9a119e3f624f.scope: Deactivated successfully.
Oct 01 13:10:14 compute-0 podman[94234]: 2025-10-01 13:10:14.342959312 +0000 UTC m=+0.055458215 container create df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:14 compute-0 systemd[1]: Started libpod-conmon-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope.
Oct 01 13:10:14 compute-0 podman[94234]: 2025-10-01 13:10:14.308891597 +0000 UTC m=+0.021390550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:14 compute-0 podman[94234]: 2025-10-01 13:10:14.424001732 +0000 UTC m=+0.136500655 container init df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:14 compute-0 podman[94234]: 2025-10-01 13:10:14.429713652 +0000 UTC m=+0.142212565 container start df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:14 compute-0 podman[94234]: 2025-10-01 13:10:14.433075347 +0000 UTC m=+0.145574270 container attach df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct 01 13:10:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct 01 13:10:14 compute-0 fervent_napier[94209]: pool 'cephfs.cephfs.meta' created
Oct 01 13:10:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct 01 13:10:14 compute-0 systemd[1]: libpod-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope: Deactivated successfully.
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.685670023 +0000 UTC m=+0.631887454 container died 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:10:14 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/797650243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:14 compute-0 ceph-mon[74802]: osdmap e19: 3 total, 3 up, 3 in
Oct 01 13:10:14 compute-0 ceph-mon[74802]: pgmap v53: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:14 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:14 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-16043874e8b148866b76e79a2771d026406cf5d9aa0a3d1405616265a4f945db-merged.mount: Deactivated successfully.
Oct 01 13:10:14 compute-0 podman[94189]: 2025-10-01 13:10:14.726232809 +0000 UTC m=+0.672450240 container remove 97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4 (image=quay.io/ceph/ceph:v18, name=fervent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:10:14 compute-0 systemd[1]: libpod-conmon-97765aa51b06185119fc92ca94aabc8139edfd8fbf4b5c165f59ab36597947d4.scope: Deactivated successfully.
Oct 01 13:10:14 compute-0 sudo[94141]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:14 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:14 compute-0 sudo[94313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhlzwuojdwdhzagfmdcumyixqtroqnv ; /usr/bin/python3'
Oct 01 13:10:14 compute-0 sudo[94313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:15 compute-0 python3[94315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:15 compute-0 podman[94316]: 2025-10-01 13:10:15.083659603 +0000 UTC m=+0.045811054 container create a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:10:15 compute-0 systemd[1]: Started libpod-conmon-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope.
Oct 01 13:10:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:15 compute-0 podman[94316]: 2025-10-01 13:10:15.064046494 +0000 UTC m=+0.026197995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]: {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     "0": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "devices": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "/dev/loop3"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             ],
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_name": "ceph_lv0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_size": "21470642176",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "name": "ceph_lv0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "tags": {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.crush_device_class": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.encrypted": "0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_id": "0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.vdo": "0"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             },
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "vg_name": "ceph_vg0"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         }
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     ],
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     "1": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "devices": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "/dev/loop4"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             ],
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_name": "ceph_lv1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_size": "21470642176",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "name": "ceph_lv1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "tags": {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.crush_device_class": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.encrypted": "0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_id": "1",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.vdo": "0"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             },
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "vg_name": "ceph_vg1"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         }
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     ],
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     "2": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "devices": [
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "/dev/loop5"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             ],
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_name": "ceph_lv2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_size": "21470642176",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "name": "ceph_lv2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "tags": {
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.crush_device_class": "",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.encrypted": "0",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osd_id": "2",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:                 "ceph.vdo": "0"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             },
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "type": "block",
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:             "vg_name": "ceph_vg2"
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:         }
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]:     ]
Oct 01 13:10:15 compute-0 ecstatic_banzai[94250]: }
Oct 01 13:10:15 compute-0 podman[94316]: 2025-10-01 13:10:15.163179081 +0000 UTC m=+0.125330582 container init a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:15 compute-0 podman[94316]: 2025-10-01 13:10:15.170687811 +0000 UTC m=+0.132839272 container start a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:10:15 compute-0 podman[94316]: 2025-10-01 13:10:15.174535659 +0000 UTC m=+0.136687130 container attach a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:10:15 compute-0 systemd[1]: libpod-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope: Deactivated successfully.
Oct 01 13:10:15 compute-0 podman[94234]: 2025-10-01 13:10:15.18419229 +0000 UTC m=+0.896691203 container died df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec945a91a4a2f1796a9b2b1d2a7d80c2eac515c2cc9239287d14241bf2ed54a-merged.mount: Deactivated successfully.
Oct 01 13:10:15 compute-0 podman[94234]: 2025-10-01 13:10:15.24881397 +0000 UTC m=+0.961312883 container remove df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:15 compute-0 systemd[1]: libpod-conmon-df09d676e2ac60a569739ca834c274745e5606dead18ca98fd29701532ef457e.scope: Deactivated successfully.
Oct 01 13:10:15 compute-0 sudo[94085]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:15 compute-0 sudo[94351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:15 compute-0 sudo[94351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:15 compute-0 sudo[94351]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:15 compute-0 sudo[94376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:15 compute-0 sudo[94376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:15 compute-0 sudo[94376]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:15 compute-0 sudo[94401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:15 compute-0 sudo[94401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:15 compute-0 sudo[94401]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:15 compute-0 sudo[94427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:10:15 compute-0 sudo[94427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct 01 13:10:15 compute-0 ceph-mon[74802]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:15 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1651900628' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:15 compute-0 ceph-mon[74802]: osdmap e20: 3 total, 3 up, 3 in
Oct 01 13:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct 01 13:10:15 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct 01 13:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct 01 13:10:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:15 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 2 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.850476876 +0000 UTC m=+0.047287566 container create f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:10:15 compute-0 systemd[1]: Started libpod-conmon-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope.
Oct 01 13:10:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.91916441 +0000 UTC m=+0.115975180 container init f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.823418727 +0000 UTC m=+0.020229507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.926988119 +0000 UTC m=+0.123798859 container start f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:15 compute-0 jovial_dubinsky[94530]: 167 167
Oct 01 13:10:15 compute-0 systemd[1]: libpod-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope: Deactivated successfully.
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.933965855 +0000 UTC m=+0.130776545 container attach f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.934347256 +0000 UTC m=+0.131157946 container died f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b48295ced68d863e3d8dcfaffa10abb236cb2e428c5fcd98698d40fe90ad70-merged.mount: Deactivated successfully.
Oct 01 13:10:15 compute-0 podman[94514]: 2025-10-01 13:10:15.976810025 +0000 UTC m=+0.173620715 container remove f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:15 compute-0 systemd[1]: libpod-conmon-f539716568eb1e47eaad31de480da34dea6d94a68219927ac59d85d6bbdb976a.scope: Deactivated successfully.
Oct 01 13:10:16 compute-0 podman[94553]: 2025-10-01 13:10:16.151819358 +0000 UTC m=+0.038488159 container create b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:16 compute-0 systemd[1]: Started libpod-conmon-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope.
Oct 01 13:10:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:16 compute-0 podman[94553]: 2025-10-01 13:10:16.133258918 +0000 UTC m=+0.019927739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:16 compute-0 podman[94553]: 2025-10-01 13:10:16.234006781 +0000 UTC m=+0.120675632 container init b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:10:16 compute-0 podman[94553]: 2025-10-01 13:10:16.243117305 +0000 UTC m=+0.129786096 container start b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:16 compute-0 podman[94553]: 2025-10-01 13:10:16.248841636 +0000 UTC m=+0.135510437 container attach b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct 01 13:10:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct 01 13:10:16 compute-0 ceph-mon[74802]: osdmap e21: 3 total, 3 up, 3 in
Oct 01 13:10:16 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 01 13:10:16 compute-0 ceph-mon[74802]: pgmap v56: 6 pgs: 2 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:16 compute-0 blissful_johnson[94335]: pool 'cephfs.cephfs.data' created
Oct 01 13:10:16 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct 01 13:10:16 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:16 compute-0 systemd[1]: libpod-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope: Deactivated successfully.
Oct 01 13:10:16 compute-0 podman[94316]: 2025-10-01 13:10:16.739992096 +0000 UTC m=+1.702143577 container died a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fae9acd5d3501a1d8943a5c0d522a00416aefdd2f607303d52e390a2da93cd02-merged.mount: Deactivated successfully.
Oct 01 13:10:16 compute-0 podman[94316]: 2025-10-01 13:10:16.784569775 +0000 UTC m=+1.746721226 container remove a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296 (image=quay.io/ceph/ceph:v18, name=blissful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:16 compute-0 systemd[1]: libpod-conmon-a128323eb42790b6a7dd8243c8b6716ab15655195043608f5b0a64519f006296.scope: Deactivated successfully.
Oct 01 13:10:16 compute-0 sudo[94313]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:16 compute-0 sudo[94614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orhrxpikbrptlmpkunlegykatusgfgwh ; /usr/bin/python3'
Oct 01 13:10:16 compute-0 sudo[94614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:17 compute-0 python3[94616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:17 compute-0 podman[94626]: 2025-10-01 13:10:17.16489882 +0000 UTC m=+0.050496576 container create fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:10:17 compute-0 systemd[1]: Started libpod-conmon-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope.
Oct 01 13:10:17 compute-0 podman[94626]: 2025-10-01 13:10:17.140936809 +0000 UTC m=+0.026534555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:17 compute-0 podman[94626]: 2025-10-01 13:10:17.257172075 +0000 UTC m=+0.142769801 container init fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:17 compute-0 podman[94626]: 2025-10-01 13:10:17.264293205 +0000 UTC m=+0.149890921 container start fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:10:17 compute-0 podman[94626]: 2025-10-01 13:10:17.268132442 +0000 UTC m=+0.153730208 container attach fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]: {
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_id": 0,
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "type": "bluestore"
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     },
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_id": 2,
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "type": "bluestore"
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     },
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_id": 1,
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:         "type": "bluestore"
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]:     }
Oct 01 13:10:17 compute-0 beautiful_shirley[94569]: }
Oct 01 13:10:17 compute-0 systemd[1]: libpod-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Deactivated successfully.
Oct 01 13:10:17 compute-0 podman[94553]: 2025-10-01 13:10:17.322814254 +0000 UTC m=+1.209483075 container died b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:10:17 compute-0 systemd[1]: libpod-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Consumed 1.074s CPU time.
Oct 01 13:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-970d2791ef6d3810437fea926c8a580938b4e8e17a5437b2b593e10f386dab6e-merged.mount: Deactivated successfully.
Oct 01 13:10:17 compute-0 podman[94553]: 2025-10-01 13:10:17.400296175 +0000 UTC m=+1.286965006 container remove b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:17 compute-0 systemd[1]: libpod-conmon-b982f4955fe9952f5c7cf1c538843ba820c1704b760ce9cb289397b970962f7b.scope: Deactivated successfully.
Oct 01 13:10:17 compute-0 sudo[94427]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:17 compute-0 sudo[94673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:17 compute-0 sudo[94673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:17 compute-0 sudo[94673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 sudo[94698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:10:17 compute-0 sudo[94698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:17 compute-0 sudo[94698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 sudo[94733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:17 compute-0 sudo[94733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:17 compute-0 sudo[94733]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct 01 13:10:17 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct 01 13:10:17 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1231343553' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 01 13:10:17 compute-0 ceph-mon[74802]: osdmap e22: 3 total, 3 up, 3 in
Oct 01 13:10:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:17 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:17 compute-0 sudo[94767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:17 compute-0 sudo[94767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:17 compute-0 sudo[94767]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:17 compute-0 sudo[94792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:17 compute-0 sudo[94792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:17 compute-0 sudo[94792]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct 01 13:10:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 01 13:10:17 compute-0 sudo[94817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:10:17 compute-0 sudo[94817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:18 compute-0 podman[94916]: 2025-10-01 13:10:18.523525013 +0000 UTC m=+0.087811692 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:10:18 compute-0 podman[94916]: 2025-10-01 13:10:18.640102388 +0000 UTC m=+0.204388997 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct 01 13:10:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 01 13:10:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct 01 13:10:18 compute-0 unruffled_bardeen[94649]: enabled application 'rbd' on pool 'vms'
Oct 01 13:10:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct 01 13:10:18 compute-0 ceph-mon[74802]: pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:18 compute-0 ceph-mon[74802]: osdmap e23: 3 total, 3 up, 3 in
Oct 01 13:10:18 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 01 13:10:18 compute-0 systemd[1]: libpod-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope: Deactivated successfully.
Oct 01 13:10:18 compute-0 podman[94626]: 2025-10-01 13:10:18.767449606 +0000 UTC m=+1.653047322 container died fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-383ed642f3137dbcdc107e8cf21eb05f8ca1ce7fa63f87c013b2babe277d349e-merged.mount: Deactivated successfully.
Oct 01 13:10:18 compute-0 podman[94626]: 2025-10-01 13:10:18.814859384 +0000 UTC m=+1.700457100 container remove fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21 (image=quay.io/ceph/ceph:v18, name=unruffled_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:18 compute-0 systemd[1]: libpod-conmon-fe221d977a14efab4edd226c83bee69c03e9ea0a9c9be6c216e0ef16ddd7de21.scope: Deactivated successfully.
Oct 01 13:10:18 compute-0 sudo[94614]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 sudo[95038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtrbolcyuoegyhklsbefjjqbarrbpvhl ; /usr/bin/python3'
Oct 01 13:10:19 compute-0 sudo[95038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:19 compute-0 python3[95043]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:19 compute-0 podman[95065]: 2025-10-01 13:10:19.206515607 +0000 UTC m=+0.057566854 container create ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:19 compute-0 sudo[94817]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:19 compute-0 systemd[1]: Started libpod-conmon-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope.
Oct 01 13:10:19 compute-0 podman[95065]: 2025-10-01 13:10:19.178103181 +0000 UTC m=+0.029154468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:19 compute-0 podman[95065]: 2025-10-01 13:10:19.30194531 +0000 UTC m=+0.152996587 container init ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:19 compute-0 podman[95065]: 2025-10-01 13:10:19.311619691 +0000 UTC m=+0.162670928 container start ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:19 compute-0 sudo[95094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:19 compute-0 podman[95065]: 2025-10-01 13:10:19.315782048 +0000 UTC m=+0.166833305 container attach ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:19 compute-0 sudo[95094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:19 compute-0 sudo[95094]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 sudo[95121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:19 compute-0 sudo[95121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:19 compute-0 sudo[95121]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 sudo[95146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:19 compute-0 sudo[95146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:19 compute-0 sudo[95146]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 sudo[95171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:10:19 compute-0 sudo[95171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3649487019' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 01 13:10:19 compute-0 ceph-mon[74802]: osdmap e24: 3 total, 3 up, 3 in
Oct 01 13:10:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 01 13:10:19 compute-0 sudo[95171]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 87983113-12b6-4c0a-829f-e0939610e618 does not exist
Oct 01 13:10:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f5823776-cfe4-4ef1-9732-8ed65589213f does not exist
Oct 01 13:10:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fd3765b9-64d2-4d47-8b90-d3fdd0320e1b does not exist
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:20 compute-0 sudo[95247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:20 compute-0 sudo[95247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:20 compute-0 sudo[95247]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:20 compute-0 sudo[95272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:20 compute-0 sudo[95272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:20 compute-0 sudo[95272]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:20 compute-0 sudo[95297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:20 compute-0 sudo[95297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:20 compute-0 sudo[95297]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:20 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:20 compute-0 sudo[95322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:10:20 compute-0 sudo[95322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.601420886 +0000 UTC m=+0.046852124 container create 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct 01 13:10:20 compute-0 systemd[1]: Started libpod-conmon-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope.
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.580331485 +0000 UTC m=+0.025762723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.692993571 +0000 UTC m=+0.138424809 container init 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.700580783 +0000 UTC m=+0.146011981 container start 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:10:20 compute-0 hungry_ride[95402]: 167 167
Oct 01 13:10:20 compute-0 systemd[1]: libpod-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope: Deactivated successfully.
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.704606297 +0000 UTC m=+0.150037555 container attach 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.705690426 +0000 UTC m=+0.151121634 container died 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-409fbfc8faca09b15e36063b7a65c36634c84600a44616f236b5e471a9967712-merged.mount: Deactivated successfully.
Oct 01 13:10:20 compute-0 podman[95386]: 2025-10-01 13:10:20.750183873 +0000 UTC m=+0.195615081 container remove 4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ride, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:20 compute-0 systemd[1]: libpod-conmon-4d97afa6e8f8ecb71e42accf0f835a52edff00864d30a4111e47efb5a1b96b3f.scope: Deactivated successfully.
Oct 01 13:10:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct 01 13:10:20 compute-0 ceph-mon[74802]: pgmap v61: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:20 compute-0 ceph-mon[74802]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 01 13:10:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct 01 13:10:20 compute-0 relaxed_volhard[95092]: enabled application 'rbd' on pool 'volumes'
Oct 01 13:10:20 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct 01 13:10:20 compute-0 systemd[1]: libpod-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope: Deactivated successfully.
Oct 01 13:10:20 compute-0 podman[95065]: 2025-10-01 13:10:20.804582887 +0000 UTC m=+1.655634164 container died ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 13:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede26fc9a0804c8ebc090aa045b55e049428b729189073d45ff04a3689d1b338-merged.mount: Deactivated successfully.
Oct 01 13:10:20 compute-0 podman[95065]: 2025-10-01 13:10:20.861511603 +0000 UTC m=+1.712562840 container remove ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a (image=quay.io/ceph/ceph:v18, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:10:20 compute-0 systemd[1]: libpod-conmon-ee02a3de24f01fa603615be977c89929be8c211d3c821ca008e5bb59391a544a.scope: Deactivated successfully.
Oct 01 13:10:20 compute-0 sudo[95038]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:20 compute-0 podman[95437]: 2025-10-01 13:10:20.946707699 +0000 UTC m=+0.056937506 container create 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:10:20 compute-0 systemd[1]: Started libpod-conmon-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope.
Oct 01 13:10:21 compute-0 podman[95437]: 2025-10-01 13:10:20.921335238 +0000 UTC m=+0.031565115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:21 compute-0 sudo[95479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbvjzxjlvigllcshmrjvctcptretkup ; /usr/bin/python3'
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 sudo[95479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:21 compute-0 podman[95437]: 2025-10-01 13:10:21.044593401 +0000 UTC m=+0.154823248 container init 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:10:21 compute-0 podman[95437]: 2025-10-01 13:10:21.057534964 +0000 UTC m=+0.167764761 container start 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:10:21 compute-0 podman[95437]: 2025-10-01 13:10:21.061360891 +0000 UTC m=+0.171590698 container attach 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:10:21 compute-0 python3[95482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:21 compute-0 podman[95485]: 2025-10-01 13:10:21.275536262 +0000 UTC m=+0.051415322 container create 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:21 compute-0 systemd[1]: Started libpod-conmon-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope.
Oct 01 13:10:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:21 compute-0 podman[95485]: 2025-10-01 13:10:21.259184593 +0000 UTC m=+0.035063673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:21 compute-0 podman[95485]: 2025-10-01 13:10:21.370547203 +0000 UTC m=+0.146426263 container init 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:10:21 compute-0 podman[95485]: 2025-10-01 13:10:21.375932054 +0000 UTC m=+0.151811124 container start 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:10:21 compute-0 podman[95485]: 2025-10-01 13:10:21.378876797 +0000 UTC m=+0.154755867 container attach 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:10:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:21 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/328615138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 01 13:10:21 compute-0 ceph-mon[74802]: osdmap e25: 3 total, 3 up, 3 in
Oct 01 13:10:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct 01 13:10:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 01 13:10:22 compute-0 wizardly_mendel[95475]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:10:22 compute-0 wizardly_mendel[95475]: --> relative data size: 1.0
Oct 01 13:10:22 compute-0 wizardly_mendel[95475]: --> All data devices are unavailable
Oct 01 13:10:22 compute-0 systemd[1]: libpod-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Deactivated successfully.
Oct 01 13:10:22 compute-0 systemd[1]: libpod-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Consumed 1.034s CPU time.
Oct 01 13:10:22 compute-0 podman[95548]: 2025-10-01 13:10:22.177288745 +0000 UTC m=+0.027990036 container died 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 01 13:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bafcd02a64153a94e76658da86b229d7f8b92a9eef3a24c79acfee93e045cd2b-merged.mount: Deactivated successfully.
Oct 01 13:10:22 compute-0 podman[95548]: 2025-10-01 13:10:22.229758084 +0000 UTC m=+0.080459355 container remove 41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:22 compute-0 systemd[1]: libpod-conmon-41834164a224c22eb2b26f8e50ad29c98f67ffdb6a54f8b96a0aa7b6dff39e40.scope: Deactivated successfully.
Oct 01 13:10:22 compute-0 sudo[95322]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:22 compute-0 sudo[95563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:22 compute-0 sudo[95563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:22 compute-0 sudo[95563]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:22 compute-0 sudo[95588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:22 compute-0 sudo[95588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:22 compute-0 sudo[95588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:22 compute-0 sudo[95613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:22 compute-0 sudo[95613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:22 compute-0 sudo[95613]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:22 compute-0 sudo[95638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:10:22 compute-0 sudo[95638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct 01 13:10:22 compute-0 ceph-mon[74802]: pgmap v63: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 01 13:10:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 01 13:10:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct 01 13:10:22 compute-0 lucid_williams[95500]: enabled application 'rbd' on pool 'backups'
Oct 01 13:10:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct 01 13:10:22 compute-0 systemd[1]: libpod-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope: Deactivated successfully.
Oct 01 13:10:22 compute-0 podman[95485]: 2025-10-01 13:10:22.839415894 +0000 UTC m=+1.615294994 container died 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a34e7f57e9b8ba021c373e3e23e870e00a71877898f59ddfbcf9bbdc181a03fe-merged.mount: Deactivated successfully.
Oct 01 13:10:22 compute-0 podman[95485]: 2025-10-01 13:10:22.910911927 +0000 UTC m=+1.686790997 container remove 40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23 (image=quay.io/ceph/ceph:v18, name=lucid_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:22 compute-0 systemd[1]: libpod-conmon-40dc41f08767750010b8081fb13e0ebbb9297a6b1fea498fef30478056723a23.scope: Deactivated successfully.
Oct 01 13:10:22 compute-0 sudo[95479]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.036808454 +0000 UTC m=+0.069712123 container create fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:23 compute-0 systemd[1]: Started libpod-conmon-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope.
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.008169062 +0000 UTC m=+0.041072791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:23 compute-0 sudo[95757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oruxrvsuzehonwqwfkizkolfbwhqmokk ; /usr/bin/python3'
Oct 01 13:10:23 compute-0 sudo[95757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.129523792 +0000 UTC m=+0.162427461 container init fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.139591644 +0000 UTC m=+0.172495283 container start fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.143336769 +0000 UTC m=+0.176240408 container attach fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:10:23 compute-0 peaceful_cori[95751]: 167 167
Oct 01 13:10:23 compute-0 systemd[1]: libpod-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope: Deactivated successfully.
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.147932667 +0000 UTC m=+0.180836306 container died fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a77614ccb1a56593d58ff0dbb89205320cc0fffc0f424879ad52ca97696da9-merged.mount: Deactivated successfully.
Oct 01 13:10:23 compute-0 podman[95715]: 2025-10-01 13:10:23.19048439 +0000 UTC m=+0.223388029 container remove fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:23 compute-0 systemd[1]: libpod-conmon-fa32f77d862dcd56db9d121b8ecb09908750b3e4264838096ea15b806550cf52.scope: Deactivated successfully.
Oct 01 13:10:23 compute-0 python3[95759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:23 compute-0 podman[95777]: 2025-10-01 13:10:23.393808605 +0000 UTC m=+0.079707163 container create b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:23 compute-0 podman[95777]: 2025-10-01 13:10:23.339691179 +0000 UTC m=+0.025589717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:23 compute-0 podman[95785]: 2025-10-01 13:10:23.439049933 +0000 UTC m=+0.111953947 container create af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:10:23 compute-0 systemd[1]: Started libpod-conmon-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope.
Oct 01 13:10:23 compute-0 podman[95785]: 2025-10-01 13:10:23.35720052 +0000 UTC m=+0.030104534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 systemd[1]: Started libpod-conmon-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope.
Oct 01 13:10:23 compute-0 podman[95777]: 2025-10-01 13:10:23.504679552 +0000 UTC m=+0.190578090 container init b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:23 compute-0 podman[95777]: 2025-10-01 13:10:23.51745613 +0000 UTC m=+0.203354658 container start b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:23 compute-0 podman[95777]: 2025-10-01 13:10:23.521870713 +0000 UTC m=+0.207769231 container attach b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:10:23 compute-0 podman[95785]: 2025-10-01 13:10:23.532227434 +0000 UTC m=+0.205131478 container init af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:23 compute-0 podman[95785]: 2025-10-01 13:10:23.545523436 +0000 UTC m=+0.218427410 container start af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:10:23 compute-0 podman[95785]: 2025-10-01 13:10:23.548915881 +0000 UTC m=+0.221819895 container attach af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 01 13:10:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:23 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1460613785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 01 13:10:23 compute-0 ceph-mon[74802]: osdmap e26: 3 total, 3 up, 3 in
Oct 01 13:10:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct 01 13:10:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 01 13:10:24 compute-0 priceless_lewin[95813]: {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     "0": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "devices": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "/dev/loop3"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             ],
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_name": "ceph_lv0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_size": "21470642176",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "name": "ceph_lv0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "tags": {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.crush_device_class": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.encrypted": "0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_id": "0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.vdo": "0"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             },
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "vg_name": "ceph_vg0"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         }
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     ],
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     "1": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "devices": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "/dev/loop4"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             ],
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_name": "ceph_lv1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_size": "21470642176",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "name": "ceph_lv1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "tags": {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.crush_device_class": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.encrypted": "0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_id": "1",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.vdo": "0"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             },
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "vg_name": "ceph_vg1"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         }
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     ],
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     "2": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "devices": [
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "/dev/loop5"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             ],
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_name": "ceph_lv2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_size": "21470642176",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "name": "ceph_lv2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "tags": {
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.crush_device_class": "",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.encrypted": "0",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osd_id": "2",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:                 "ceph.vdo": "0"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             },
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "type": "block",
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:             "vg_name": "ceph_vg2"
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:         }
Oct 01 13:10:24 compute-0 priceless_lewin[95813]:     ]
Oct 01 13:10:24 compute-0 priceless_lewin[95813]: }
Oct 01 13:10:24 compute-0 systemd[1]: libpod-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope: Deactivated successfully.
Oct 01 13:10:24 compute-0 podman[95785]: 2025-10-01 13:10:24.277943065 +0000 UTC m=+0.950847069 container died af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-06464dec8ecda13957a9ec6d3b413d3d0e87be9afa10f362bc77912197155a11-merged.mount: Deactivated successfully.
Oct 01 13:10:24 compute-0 podman[95785]: 2025-10-01 13:10:24.344247393 +0000 UTC m=+1.017151397 container remove af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:24 compute-0 systemd[1]: libpod-conmon-af5106334232a5b7b95632fc271ba5aebf2788deeb95a5a1f3e9fd32f1989e4a.scope: Deactivated successfully.
Oct 01 13:10:24 compute-0 sudo[95638]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:24 compute-0 sudo[95858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:24 compute-0 sudo[95858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:24 compute-0 sudo[95858]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:24 compute-0 sudo[95885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:24 compute-0 sudo[95885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:24 compute-0 sudo[95885]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:24 compute-0 sudo[95910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:24 compute-0 sudo[95910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:24 compute-0 sudo[95910]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:24 compute-0 sudo[95935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:10:24 compute-0 sudo[95935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct 01 13:10:24 compute-0 ceph-mon[74802]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:24 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 01 13:10:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 01 13:10:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct 01 13:10:24 compute-0 zealous_sinoussi[95808]: enabled application 'rbd' on pool 'images'
Oct 01 13:10:24 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct 01 13:10:25 compute-0 systemd[1]: libpod-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope: Deactivated successfully.
Oct 01 13:10:25 compute-0 podman[95777]: 2025-10-01 13:10:25.005724284 +0000 UTC m=+1.691622842 container died b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 13:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-176131b6e05374f4257997c5f5f93b0d63c3890a0ebddbcc24e029ec34140c6d-merged.mount: Deactivated successfully.
Oct 01 13:10:25 compute-0 podman[95777]: 2025-10-01 13:10:25.070354235 +0000 UTC m=+1.756252793 container remove b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca (image=quay.io/ceph/ceph:v18, name=zealous_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.075879199 +0000 UTC m=+0.082867482 container create 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:10:25 compute-0 systemd[1]: libpod-conmon-b883b701717dc84c9ae232199017fdc418305d359c752d792ce35769028373ca.scope: Deactivated successfully.
Oct 01 13:10:25 compute-0 sudo[95757]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:25 compute-0 systemd[1]: Started libpod-conmon-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope.
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.025663433 +0000 UTC m=+0.032651766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.243792293 +0000 UTC m=+0.250780576 container init 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.254374781 +0000 UTC m=+0.261363034 container start 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:25 compute-0 affectionate_poitras[96029]: 167 167
Oct 01 13:10:25 compute-0 systemd[1]: libpod-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope: Deactivated successfully.
Oct 01 13:10:25 compute-0 sudo[96055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrgggrnyrkonuuvjkpeimlxonjutcjjb ; /usr/bin/python3'
Oct 01 13:10:25 compute-0 sudo[96055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.340269987 +0000 UTC m=+0.347258270 container attach 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.341362087 +0000 UTC m=+0.348350400 container died 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0491385ffe27c5b81cbfdae6ff9e59f8f0e5133a97fc1c704adf0180782cff2c-merged.mount: Deactivated successfully.
Oct 01 13:10:25 compute-0 podman[96001]: 2025-10-01 13:10:25.401697848 +0000 UTC m=+0.408686121 container remove 9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_poitras, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:25 compute-0 systemd[1]: libpod-conmon-9ba82bcfe60a929203cbd459f460cd44284ab7112e2bcd5205bc01175316bef0.scope: Deactivated successfully.
Oct 01 13:10:25 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:25 compute-0 python3[96060]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:25 compute-0 podman[96076]: 2025-10-01 13:10:25.586902756 +0000 UTC m=+0.098843860 container create a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:25 compute-0 podman[96076]: 2025-10-01 13:10:25.535883407 +0000 UTC m=+0.047824551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:25 compute-0 systemd[1]: Started libpod-conmon-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope.
Oct 01 13:10:25 compute-0 podman[96094]: 2025-10-01 13:10:25.676218109 +0000 UTC m=+0.097308857 container create 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:10:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 systemd[1]: Started libpod-conmon-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope.
Oct 01 13:10:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:25 compute-0 podman[96094]: 2025-10-01 13:10:25.647868204 +0000 UTC m=+0.068959052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:25 compute-0 podman[96094]: 2025-10-01 13:10:25.74052446 +0000 UTC m=+0.161615258 container init 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:10:25 compute-0 podman[96076]: 2025-10-01 13:10:25.743808792 +0000 UTC m=+0.255749876 container init a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:25 compute-0 podman[96076]: 2025-10-01 13:10:25.750616522 +0000 UTC m=+0.262557596 container start a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:25 compute-0 podman[96076]: 2025-10-01 13:10:25.754880452 +0000 UTC m=+0.266821546 container attach a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:10:25 compute-0 podman[96094]: 2025-10-01 13:10:25.76371086 +0000 UTC m=+0.184801648 container start 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:10:25 compute-0 podman[96094]: 2025-10-01 13:10:25.768497614 +0000 UTC m=+0.189588452 container attach 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:10:25 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2650268208' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 01 13:10:25 compute-0 ceph-mon[74802]: osdmap e27: 3 total, 3 up, 3 in
Oct 01 13:10:25 compute-0 ceph-mon[74802]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:10:25 compute-0 sshd-session[95866]: Received disconnect from 27.254.137.144 port 34880:11: Bye Bye [preauth]
Oct 01 13:10:25 compute-0 sshd-session[95866]: Disconnected from authenticating user root 27.254.137.144 port 34880 [preauth]
Oct 01 13:10:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct 01 13:10:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]: {
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_id": 0,
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "type": "bluestore"
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     },
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_id": 2,
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "type": "bluestore"
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     },
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_id": 1,
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:         "type": "bluestore"
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]:     }
Oct 01 13:10:26 compute-0 friendly_mestorf[96115]: }
Oct 01 13:10:26 compute-0 systemd[1]: libpod-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Deactivated successfully.
Oct 01 13:10:26 compute-0 podman[96094]: 2025-10-01 13:10:26.868143621 +0000 UTC m=+1.289234409 container died 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:26 compute-0 systemd[1]: libpod-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Consumed 1.115s CPU time.
Oct 01 13:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a7245f6628d825c2d7d019b66aa34416de3dce002a1222dd823a77223c7d31-merged.mount: Deactivated successfully.
Oct 01 13:10:26 compute-0 podman[96094]: 2025-10-01 13:10:26.940434396 +0000 UTC m=+1.361525184 container remove 1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:10:26 compute-0 systemd[1]: libpod-conmon-1c0fa0c614292cb4290881f5419bbb587253d79247ddabf0c6d72b38b78b0469.scope: Deactivated successfully.
Oct 01 13:10:26 compute-0 sudo[95935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct 01 13:10:27 compute-0 ceph-mon[74802]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:27 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 01 13:10:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 01 13:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct 01 13:10:27 compute-0 determined_hoover[96110]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct 01 13:10:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct 01 13:10:27 compute-0 systemd[1]: libpod-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope: Deactivated successfully.
Oct 01 13:10:27 compute-0 podman[96076]: 2025-10-01 13:10:27.048844034 +0000 UTC m=+1.560785128 container died a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:27 compute-0 sudo[96180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:27 compute-0 sudo[96180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:27 compute-0 sudo[96180]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a805ce5f7208cb547f66bfabf08a26109425c1235bfcb4aef1c3663d096e837d-merged.mount: Deactivated successfully.
Oct 01 13:10:27 compute-0 sudo[96218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:10:27 compute-0 sudo[96218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:27 compute-0 sudo[96218]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:27 compute-0 podman[96076]: 2025-10-01 13:10:27.198184557 +0000 UTC m=+1.710125631 container remove a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc (image=quay.io/ceph/ceph:v18, name=determined_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:10:27 compute-0 systemd[1]: libpod-conmon-a33e89e902bc37a913c446e7456180bca6c354c385d8a217280fc52b4be472bc.scope: Deactivated successfully.
Oct 01 13:10:27 compute-0 sudo[96055]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:27 compute-0 sudo[96266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjueysjbmqoviilozrrggfzczmowsxb ; /usr/bin/python3'
Oct 01 13:10:27 compute-0 sudo[96266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:27 compute-0 python3[96268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:27 compute-0 podman[96269]: 2025-10-01 13:10:27.640685834 +0000 UTC m=+0.022801040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:27 compute-0 podman[96269]: 2025-10-01 13:10:27.750607893 +0000 UTC m=+0.132723139 container create c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:10:27 compute-0 systemd[1]: Started libpod-conmon-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope.
Oct 01 13:10:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:27 compute-0 podman[96269]: 2025-10-01 13:10:27.901683836 +0000 UTC m=+0.283799112 container init c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:10:27 compute-0 podman[96269]: 2025-10-01 13:10:27.912404586 +0000 UTC m=+0.294519832 container start c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:10:27 compute-0 podman[96269]: 2025-10-01 13:10:27.921151631 +0000 UTC m=+0.303266847 container attach c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:10:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:28 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2042548635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 01 13:10:28 compute-0 ceph-mon[74802]: osdmap e28: 3 total, 3 up, 3 in
Oct 01 13:10:28 compute-0 sshd[1010]: Timeout before authentication for connection from 202.103.55.158 to 38.102.83.245, pid = 75697
Oct 01 13:10:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct 01 13:10:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 01 13:10:28 compute-0 sshd-session[96289]: Invalid user seekcy from 80.253.31.232 port 37404
Oct 01 13:10:28 compute-0 sshd-session[96289]: Received disconnect from 80.253.31.232 port 37404:11: Bye Bye [preauth]
Oct 01 13:10:28 compute-0 sshd-session[96289]: Disconnected from invalid user seekcy 80.253.31.232 port 37404 [preauth]
Oct 01 13:10:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct 01 13:10:29 compute-0 ceph-mon[74802]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:29 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 01 13:10:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 01 13:10:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct 01 13:10:29 compute-0 clever_khayyam[96285]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct 01 13:10:29 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct 01 13:10:29 compute-0 systemd[1]: libpod-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope: Deactivated successfully.
Oct 01 13:10:29 compute-0 podman[96269]: 2025-10-01 13:10:29.300537776 +0000 UTC m=+1.682653012 container died c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-954e3846d215df75906e7da133f9234c707583d7529d7fa3167b22dd8c96b5ed-merged.mount: Deactivated successfully.
Oct 01 13:10:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:29 compute-0 podman[96269]: 2025-10-01 13:10:29.875663718 +0000 UTC m=+2.257778964 container remove c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1 (image=quay.io/ceph/ceph:v18, name=clever_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:29 compute-0 systemd[1]: libpod-conmon-c2587b4608164a5c94da01a1afb93400079694daab26ed1d315ca487ea7949b1.scope: Deactivated successfully.
Oct 01 13:10:29 compute-0 sudo[96266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 13:10:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 13:10:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1497231379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 01 13:10:30 compute-0 ceph-mon[74802]: osdmap e29: 3 total, 3 up, 3 in
Oct 01 13:10:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:30 compute-0 python3[96401]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:10:31 compute-0 python3[96472]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324230.557655-33861-14683110072787/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:10:31 compute-0 ceph-mon[74802]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:31 compute-0 ceph-mon[74802]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 13:10:31 compute-0 ceph-mon[74802]: Cluster is now healthy
Oct 01 13:10:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:31 compute-0 sudo[96572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgbnmufpgikyusndxktbockxokydiztq ; /usr/bin/python3'
Oct 01 13:10:31 compute-0 sudo[96572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:32 compute-0 python3[96574]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:10:32 compute-0 sudo[96572]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:32 compute-0 sudo[96647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpbftgcbbswgingbcepnlrriopicfquc ; /usr/bin/python3'
Oct 01 13:10:32 compute-0 sudo[96647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:32 compute-0 python3[96649]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324231.6416728-33875-256645427039545/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=897ffa25907ca0d218e2daaa59ac7825cb09ab42 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:10:32 compute-0 sudo[96647]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:32 compute-0 sudo[96697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-memglaouruvxfglijhlqnawtqxxizpmo ; /usr/bin/python3'
Oct 01 13:10:32 compute-0 sudo[96697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:32 compute-0 python3[96699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:33.011876911 +0000 UTC m=+0.119773228 container create 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:32.929866873 +0000 UTC m=+0.037763170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:33 compute-0 systemd[1]: Started libpod-conmon-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope.
Oct 01 13:10:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:33.235913237 +0000 UTC m=+0.343809604 container init 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:33.244686272 +0000 UTC m=+0.352582579 container start 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:33.36488708 +0000 UTC m=+0.472783457 container attach 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:33 compute-0 ceph-mon[74802]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct 01 13:10:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:10:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 13:10:33 compute-0 musing_germain[96716]: 
Oct 01 13:10:33 compute-0 musing_germain[96716]: [global]
Oct 01 13:10:33 compute-0 musing_germain[96716]:         fsid = eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:10:33 compute-0 musing_germain[96716]:         mon_host = 192.168.122.100
Oct 01 13:10:33 compute-0 systemd[1]: libpod-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope: Deactivated successfully.
Oct 01 13:10:33 compute-0 podman[96700]: 2025-10-01 13:10:33.881249877 +0000 UTC m=+0.989146184 container died 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:33 compute-0 sudo[96741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:33 compute-0 sudo[96741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:33 compute-0 sudo[96741]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-864e3d1a8a4fb910e762aecbb9e382f4498e2835aab8c656eaec4c4ca35c6d88-merged.mount: Deactivated successfully.
Oct 01 13:10:34 compute-0 sudo[96778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:34 compute-0 sudo[96778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:34 compute-0 sudo[96778]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:34 compute-0 sudo[96803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:34 compute-0 sudo[96803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:34 compute-0 sudo[96803]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:34 compute-0 podman[96700]: 2025-10-01 13:10:34.164130472 +0000 UTC m=+1.272026749 container remove 042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e (image=quay.io/ceph/ceph:v18, name=musing_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:10:34 compute-0 systemd[1]: libpod-conmon-042f2e984693266d678296efb8c8b0b2275b7c1bb47d794eb01afe063231bb5e.scope: Deactivated successfully.
Oct 01 13:10:34 compute-0 sudo[96697]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:34 compute-0 sudo[96828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:10:34 compute-0 sudo[96828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:34 compute-0 sudo[96895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuktmftsybtbnrclpppbvetgzdjpypzq ; /usr/bin/python3'
Oct 01 13:10:34 compute-0 sudo[96895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:34 compute-0 python3[96904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 01 13:10:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1902963374' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 01 13:10:34 compute-0 podman[96937]: 2025-10-01 13:10:34.658636895 +0000 UTC m=+0.035478345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:34 compute-0 podman[96937]: 2025-10-01 13:10:34.834540994 +0000 UTC m=+0.211382384 container create 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:35 compute-0 systemd[1]: Started libpod-conmon-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope.
Oct 01 13:10:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:35 compute-0 podman[96963]: 2025-10-01 13:10:35.403236685 +0000 UTC m=+0.686463822 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:10:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:35 compute-0 podman[96937]: 2025-10-01 13:10:35.556503529 +0000 UTC m=+0.933344969 container init 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:35 compute-0 podman[96963]: 2025-10-01 13:10:35.564160674 +0000 UTC m=+0.847387771 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:35 compute-0 podman[96937]: 2025-10-01 13:10:35.570842441 +0000 UTC m=+0.947683831 container start 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:10:35 compute-0 ceph-mon[74802]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:35 compute-0 podman[96937]: 2025-10-01 13:10:35.799315502 +0000 UTC m=+1.176156892 container attach 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct 01 13:10:36 compute-0 sudo[96828]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3256476136' entity='client.admin' 
Oct 01 13:10:36 compute-0 trusting_buck[96982]: set ssl_option
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:36 compute-0 podman[96937]: 2025-10-01 13:10:36.335718459 +0000 UTC m=+1.712559819 container died 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:36 compute-0 systemd[1]: libpod-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope: Deactivated successfully.
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b4236ebf45e3dd48fc41fc500e4712c2cdabe8850cf88ec18e8ab57dfd2abe-merged.mount: Deactivated successfully.
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:36 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a037598a-2cc6-437a-a366-dfdb0a649ade does not exist
Oct 01 13:10:36 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d1f1b12b-645f-4ff5-9f21-c228683686cf does not exist
Oct 01 13:10:36 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c95ccf9-1e6f-493a-ae24-428ecf736505 does not exist
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:36 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:36 compute-0 sudo[97129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:36 compute-0 sudo[97129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:36 compute-0 sudo[97129]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:37 compute-0 sudo[97154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:37 compute-0 sudo[97154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:37 compute-0 sudo[97154]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:37 compute-0 sudo[97179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:37 compute-0 sudo[97179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:37 compute-0 sudo[97179]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:37 compute-0 sudo[97204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:10:37 compute-0 sudo[97204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:37 compute-0 podman[96937]: 2025-10-01 13:10:37.245082259 +0000 UTC m=+2.621923609 container remove 3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80 (image=quay.io/ceph/ceph:v18, name=trusting_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:37 compute-0 sudo[96895]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:37 compute-0 systemd[1]: libpod-conmon-3510c8869ee636b5d3d5c35f10ba43df4ea7face350445e713ab64d6cf969d80.scope: Deactivated successfully.
Oct 01 13:10:37 compute-0 sudo[97287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtujrolfgclzptvdamdzflfinjnfcrxx ; /usr/bin/python3'
Oct 01 13:10:37 compute-0 sudo[97287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:37 compute-0 ceph-mon[74802]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3256476136' entity='client.admin' 
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.574415904 +0000 UTC m=+0.078663158 container create abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.517294773 +0000 UTC m=+0.021542107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:37 compute-0 python3[97289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:37 compute-0 systemd[1]: Started libpod-conmon-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope.
Oct 01 13:10:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:37 compute-0 podman[97308]: 2025-10-01 13:10:37.724574319 +0000 UTC m=+0.038948017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.859291775 +0000 UTC m=+0.363539099 container init abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.871411824 +0000 UTC m=+0.375659058 container start abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:37 compute-0 gracious_mccarthy[97322]: 167 167
Oct 01 13:10:37 compute-0 systemd[1]: libpod-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope: Deactivated successfully.
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.972219906 +0000 UTC m=+0.476467170 container attach abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:10:37 compute-0 podman[97294]: 2025-10-01 13:10:37.972535586 +0000 UTC m=+0.476782820 container died abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:38 compute-0 podman[97308]: 2025-10-01 13:10:38.225359709 +0000 UTC m=+0.539733397 container create c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:10:38 compute-0 systemd[1]: Started libpod-conmon-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope.
Oct 01 13:10:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-be3c959f10c45a45791b9a8e862a0109016ba0ce91054c95fefd280c7f7da204-merged.mount: Deactivated successfully.
Oct 01 13:10:38 compute-0 podman[97294]: 2025-10-01 13:10:38.795947066 +0000 UTC m=+1.300194360 container remove abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:10:39 compute-0 podman[97308]: 2025-10-01 13:10:39.040609441 +0000 UTC m=+1.354983139 container init c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:39 compute-0 podman[97308]: 2025-10-01 13:10:39.051993418 +0000 UTC m=+1.366367106 container start c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:10:39 compute-0 podman[97308]: 2025-10-01 13:10:39.087445168 +0000 UTC m=+1.401818836 container attach c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:39 compute-0 podman[97351]: 2025-10-01 13:10:39.061447426 +0000 UTC m=+0.092556941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:39 compute-0 podman[97351]: 2025-10-01 13:10:39.174536052 +0000 UTC m=+0.205645587 container create df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:10:39 compute-0 systemd[1]: Started libpod-conmon-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope.
Oct 01 13:10:39 compute-0 systemd[1]: libpod-conmon-abdb39ebc5a5cd5ec407cd37d8a579a1e2ffc72eff8cebdb7b2636742b84ca60.scope: Deactivated successfully.
Oct 01 13:10:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:39 compute-0 podman[97351]: 2025-10-01 13:10:39.351063931 +0000 UTC m=+0.382173526 container init df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:10:39 compute-0 podman[97351]: 2025-10-01 13:10:39.361608162 +0000 UTC m=+0.392717667 container start df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:10:39 compute-0 podman[97351]: 2025-10-01 13:10:39.378666012 +0000 UTC m=+0.409775597 container attach df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:39 compute-0 ceph-mon[74802]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:39 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:39 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:39 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 13:10:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:39 compute-0 ecstatic_stonebraker[97341]: Scheduled rgw.rgw update...
Oct 01 13:10:39 compute-0 systemd[1]: libpod-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope: Deactivated successfully.
Oct 01 13:10:39 compute-0 podman[97308]: 2025-10-01 13:10:39.685866043 +0000 UTC m=+2.000239721 container died c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-417afac3aeeec46aa48dbe95efd123c7b9a6c66f3decf76af1e8fb6b5ab8d66e-merged.mount: Deactivated successfully.
Oct 01 13:10:40 compute-0 podman[97308]: 2025-10-01 13:10:40.217565315 +0000 UTC m=+2.531939003 container remove c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1 (image=quay.io/ceph/ceph:v18, name=ecstatic_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:40 compute-0 systemd[1]: libpod-conmon-c751374210ced5d181b4655357e19b32e94208049c2fcf65d5b396a47eeedfd1.scope: Deactivated successfully.
Oct 01 13:10:40 compute-0 sudo[97287]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:40 compute-0 priceless_leavitt[97369]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:10:40 compute-0 priceless_leavitt[97369]: --> relative data size: 1.0
Oct 01 13:10:40 compute-0 priceless_leavitt[97369]: --> All data devices are unavailable
Oct 01 13:10:40 compute-0 systemd[1]: libpod-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope: Deactivated successfully.
Oct 01 13:10:40 compute-0 podman[97351]: 2025-10-01 13:10:40.416741524 +0000 UTC m=+1.447851019 container died df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:40 compute-0 ceph-mon[74802]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:40 compute-0 ceph-mon[74802]: Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:40 compute-0 ceph-mon[74802]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c6ecc0111c92ddc719ec2fa5a9d83346cb3e31a350f58d0358a3460b824f2c0-merged.mount: Deactivated successfully.
Oct 01 13:10:41 compute-0 podman[97351]: 2025-10-01 13:10:41.068519185 +0000 UTC m=+2.099628690 container remove df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_leavitt, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:10:41 compute-0 systemd[1]: libpod-conmon-df5cbf9966d008983ec9c497e60da91e7f4abe2d08ca0c8ff71a4b7044f79602.scope: Deactivated successfully.
Oct 01 13:10:41 compute-0 sudo[97204]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:41 compute-0 sudo[97519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:41 compute-0 sudo[97519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:41 compute-0 sudo[97519]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:41 compute-0 python3[97518]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:10:41 compute-0 sudo[97544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:41 compute-0 sudo[97544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:41 compute-0 sudo[97544]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:41 compute-0 sudo[97570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:41 compute-0 sudo[97570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:41 compute-0 sudo[97570]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:41 compute-0 sudo[97617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:10:41 compute-0 sudo[97617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:41 compute-0 python3[97689]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324240.9658973-33916-121971589978421/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:10:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.780410726 +0000 UTC m=+0.066778416 container create caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:41 compute-0 systemd[1]: Started libpod-conmon-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope.
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.752947619 +0000 UTC m=+0.039315349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.869398018 +0000 UTC m=+0.155765688 container init caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.875628167 +0000 UTC m=+0.161995817 container start caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.879742303 +0000 UTC m=+0.166109983 container attach caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:10:41 compute-0 affectionate_kowalevski[97769]: 167 167
Oct 01 13:10:41 compute-0 systemd[1]: libpod-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope: Deactivated successfully.
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.880634621 +0000 UTC m=+0.167002271 container died caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d77f45d999d3097a957b1249aebb13d063785b0e635a24e8609aa8b46337d40-merged.mount: Deactivated successfully.
Oct 01 13:10:41 compute-0 sudo[97798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mssijvrfovcsjnhkqxmfqyxdaujljjlc ; /usr/bin/python3'
Oct 01 13:10:41 compute-0 sudo[97798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:41 compute-0 podman[97753]: 2025-10-01 13:10:41.910089578 +0000 UTC m=+0.196457218 container remove caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:10:41 compute-0 systemd[1]: libpod-conmon-caaeed2e1c214e0d80e0d10671aa9f1df33c7705180081eb8f8ee20d275b804e.scope: Deactivated successfully.
Oct 01 13:10:42 compute-0 python3[97806]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:42 compute-0 podman[97817]: 2025-10-01 13:10:42.075671884 +0000 UTC m=+0.032375658 container create 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:10:42 compute-0 systemd[1]: Started libpod-conmon-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope.
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.107465323 +0000 UTC m=+0.039503935 container create a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:42 compute-0 systemd[1]: Started libpod-conmon-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope.
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:42 compute-0 podman[97817]: 2025-10-01 13:10:42.150526384 +0000 UTC m=+0.107230178 container init 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:42 compute-0 podman[97817]: 2025-10-01 13:10:42.062333037 +0000 UTC m=+0.019036841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.164536781 +0000 UTC m=+0.096575413 container init a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:10:42 compute-0 podman[97817]: 2025-10-01 13:10:42.166025837 +0000 UTC m=+0.122729621 container start 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:10:42 compute-0 podman[97817]: 2025-10-01 13:10:42.168943686 +0000 UTC m=+0.125647480 container attach 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.172259037 +0000 UTC m=+0.104297649 container start a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.174946849 +0000 UTC m=+0.106985461 container attach a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.090143195 +0000 UTC m=+0.022181847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:42 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:42 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 01 13:10:42 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0[74798]: 2025-10-01T13:10:42.679+0000 7fa515793640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e2 new map
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T13:10:42.681473+0000
                                           modified        2025-10-01T13:10:42.681508+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct 01 13:10:42 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:42 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 13:10:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:42 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct 01 13:10:42 compute-0 systemd[1]: libpod-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope: Deactivated successfully.
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.717262234 +0000 UTC m=+0.649300846 container died a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b11b47d940ab1871bd6a3ca96d1e6dba052999085d45d038fbd1c4537dd7a8d-merged.mount: Deactivated successfully.
Oct 01 13:10:42 compute-0 podman[97828]: 2025-10-01 13:10:42.762396489 +0000 UTC m=+0.694435141 container remove a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0 (image=quay.io/ceph/ceph:v18, name=sad_yalow, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:42 compute-0 systemd[1]: libpod-conmon-a88ac38f2f4a43eddcfbeaf2a1b5c869a2e1782ad195758dda608d51edeb14f0.scope: Deactivated successfully.
Oct 01 13:10:42 compute-0 sudo[97798]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:42 compute-0 sweet_galileo[97844]: {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     "0": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "devices": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "/dev/loop3"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             ],
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_name": "ceph_lv0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_size": "21470642176",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "name": "ceph_lv0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "tags": {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.crush_device_class": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.encrypted": "0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_id": "0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.vdo": "0"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             },
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "vg_name": "ceph_vg0"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         }
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     ],
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     "1": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "devices": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "/dev/loop4"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             ],
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_name": "ceph_lv1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_size": "21470642176",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "name": "ceph_lv1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "tags": {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.crush_device_class": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.encrypted": "0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_id": "1",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.vdo": "0"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             },
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "vg_name": "ceph_vg1"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         }
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     ],
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     "2": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "devices": [
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "/dev/loop5"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             ],
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_name": "ceph_lv2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_size": "21470642176",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "name": "ceph_lv2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "tags": {
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.crush_device_class": "",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.encrypted": "0",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osd_id": "2",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:                 "ceph.vdo": "0"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             },
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "type": "block",
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:             "vg_name": "ceph_vg2"
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:         }
Oct 01 13:10:42 compute-0 sweet_galileo[97844]:     ]
Oct 01 13:10:42 compute-0 sweet_galileo[97844]: }
Oct 01 13:10:43 compute-0 sudo[97920]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsspjmfsavvradgfvbnjytxwaujgsdiq ; /usr/bin/python3'
Oct 01 13:10:43 compute-0 sudo[97920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:43 compute-0 ceph-mon[74802]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 01 13:10:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 01 13:10:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 01 13:10:43 compute-0 ceph-mon[74802]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 01 13:10:43 compute-0 ceph-mon[74802]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 01 13:10:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 01 13:10:43 compute-0 ceph-mon[74802]: osdmap e30: 3 total, 3 up, 3 in
Oct 01 13:10:43 compute-0 ceph-mon[74802]: fsmap cephfs:0
Oct 01 13:10:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:43 compute-0 systemd[1]: libpod-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope: Deactivated successfully.
Oct 01 13:10:43 compute-0 podman[97817]: 2025-10-01 13:10:43.027561739 +0000 UTC m=+0.984265553 container died 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-238f53ca472bf26fb443993cae452bb87ffba259e83697b2241a75148dfa353e-merged.mount: Deactivated successfully.
Oct 01 13:10:43 compute-0 podman[97817]: 2025-10-01 13:10:43.100515402 +0000 UTC m=+1.057219186 container remove 8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galileo, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:43 compute-0 systemd[1]: libpod-conmon-8cd38c93aa8c11551e2113fba56001672306b6284421ed3a527bd5780e897ea2.scope: Deactivated successfully.
Oct 01 13:10:43 compute-0 sudo[97617]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:43 compute-0 python3[97922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:43 compute-0 sudo[97935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:43 compute-0 sudo[97935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:43 compute-0 sudo[97935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.222363284 +0000 UTC m=+0.052529591 container create 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:10:43 compute-0 systemd[1]: Started libpod-conmon-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope.
Oct 01 13:10:43 compute-0 sudo[97973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.200702685 +0000 UTC m=+0.030868982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:43 compute-0 sudo[97973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:43 compute-0 sudo[97973]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.318315438 +0000 UTC m=+0.148481715 container init 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.326923821 +0000 UTC m=+0.157090078 container start 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.33017084 +0000 UTC m=+0.160337117 container attach 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:10:43 compute-0 sudo[98003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:43 compute-0 sudo[98003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:43 compute-0 sudo[98003]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:43 compute-0 sudo[98029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:10:43 compute-0 sudo[98029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.795938363 +0000 UTC m=+0.062263979 container create 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:10:43 compute-0 systemd[1]: Started libpod-conmon-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope.
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.765152895 +0000 UTC m=+0.031478621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:43 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:43 compute-0 ceph-mgr[75103]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:43 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.886228373 +0000 UTC m=+0.152553999 container init 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 13:10:43 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.896537108 +0000 UTC m=+0.162862714 container start 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:43 compute-0 sad_bhaskara[97998]: Scheduled mds.cephfs update...
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.90022388 +0000 UTC m=+0.166549506 container attach 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:10:43 compute-0 relaxed_haslett[98127]: 167 167
Oct 01 13:10:43 compute-0 systemd[1]: libpod-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope: Deactivated successfully.
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.90382232 +0000 UTC m=+0.170147956 container died 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:10:43 compute-0 systemd[1]: libpod-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope: Deactivated successfully.
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.913360991 +0000 UTC m=+0.743527248 container died 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-084ab18a98a0a931192117a82e299d17e845a3e5ba818280d3a8e536d8db2ff3-merged.mount: Deactivated successfully.
Oct 01 13:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-179afc2c5d4413f65be465d0d31e03945e8d06591c898347d0f8ba28ac1f35eb-merged.mount: Deactivated successfully.
Oct 01 13:10:43 compute-0 podman[98110]: 2025-10-01 13:10:43.964853789 +0000 UTC m=+0.231179435 container remove 45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:43 compute-0 podman[97941]: 2025-10-01 13:10:43.976115622 +0000 UTC m=+0.806281879 container remove 02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8 (image=quay.io/ceph/ceph:v18, name=sad_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:10:43 compute-0 systemd[1]: libpod-conmon-02cacdaf7ec51fb07f9a8c214104e4747458b5153e294863fd05f6c36efd2ff8.scope: Deactivated successfully.
Oct 01 13:10:43 compute-0 systemd[1]: libpod-conmon-45bcf45aecd381122338d62831de271c81d559a0a4778b22bfe776cfc9a7c36c.scope: Deactivated successfully.
Oct 01 13:10:44 compute-0 sudo[97920]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:44 compute-0 ceph-mon[74802]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:44 compute-0 ceph-mon[74802]: Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:44 compute-0 podman[98166]: 2025-10-01 13:10:44.201903603 +0000 UTC m=+0.061229477 container create f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:10:44 compute-0 systemd[1]: Started libpod-conmon-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope.
Oct 01 13:10:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:44 compute-0 podman[98166]: 2025-10-01 13:10:44.183514512 +0000 UTC m=+0.042840356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:44 compute-0 podman[98166]: 2025-10-01 13:10:44.300327461 +0000 UTC m=+0.159653365 container init f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:44 compute-0 podman[98166]: 2025-10-01 13:10:44.313465852 +0000 UTC m=+0.172791726 container start f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:10:44 compute-0 podman[98166]: 2025-10-01 13:10:44.318137585 +0000 UTC m=+0.177463479 container attach f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:10:44 compute-0 sudo[98262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iszzxemyimwtviuqlmaptodyjnpndzmq ; /usr/bin/python3'
Oct 01 13:10:44 compute-0 sudo[98262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:44 compute-0 python3[98264]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 01 13:10:44 compute-0 sudo[98262]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 ceph-mon[74802]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:45 compute-0 ceph-mon[74802]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 13:10:45 compute-0 ceph-mon[74802]: Saving service mds.cephfs spec with placement compute-0
Oct 01 13:10:45 compute-0 sudo[98349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkueotljwrflucshxbtaysqfgbrzkna ; /usr/bin/python3'
Oct 01 13:10:45 compute-0 sudo[98349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:45 compute-0 hungry_swirles[98182]: {
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_id": 0,
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "type": "bluestore"
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     },
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_id": 2,
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "type": "bluestore"
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     },
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_id": 1,
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:         "type": "bluestore"
Oct 01 13:10:45 compute-0 hungry_swirles[98182]:     }
Oct 01 13:10:45 compute-0 hungry_swirles[98182]: }
Oct 01 13:10:45 compute-0 python3[98353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324244.4685662-33946-161045783604869/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=cb7a726d0a2db4bead6fc30d6d9fab3edee0b4fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:10:45 compute-0 sudo[98349]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 systemd[1]: libpod-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Deactivated successfully.
Oct 01 13:10:45 compute-0 systemd[1]: libpod-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Consumed 1.026s CPU time.
Oct 01 13:10:45 compute-0 podman[98366]: 2025-10-01 13:10:45.375973338 +0000 UTC m=+0.029320534 container died f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6dd2f8c21fdacf40850fb5f2f26e2107a3a820089d6c1579afefdd931058cc8-merged.mount: Deactivated successfully.
Oct 01 13:10:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:45 compute-0 podman[98366]: 2025-10-01 13:10:45.438150203 +0000 UTC m=+0.091497299 container remove f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:10:45 compute-0 systemd[1]: libpod-conmon-f468f3d76725193778b495115855a71a89857ec657fbebc334c7ef3cfb7a629f.scope: Deactivated successfully.
Oct 01 13:10:45 compute-0 sudo[98029]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:45 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:45 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:45 compute-0 sudo[98403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:45 compute-0 sudo[98403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:45 compute-0 sudo[98403]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 sudo[98428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:10:45 compute-0 sudo[98428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:45 compute-0 sudo[98428]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:45 compute-0 sudo[98493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifqypvyghpxxudxizaiwkwusbjftbnay ; /usr/bin/python3'
Oct 01 13:10:45 compute-0 sudo[98493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:45 compute-0 sudo[98465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:45 compute-0 sudo[98465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:45 compute-0 sudo[98465]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 sudo[98504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:45 compute-0 sudo[98504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:45 compute-0 sudo[98504]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 python3[98501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:45 compute-0 sudo[98529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:45 compute-0 sudo[98529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:45 compute-0 sudo[98529]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:45 compute-0 podman[98537]: 2025-10-01 13:10:45.945291956 +0000 UTC m=+0.059881336 container create 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:45 compute-0 systemd[1]: Started libpod-conmon-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope.
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:45.922198492 +0000 UTC m=+0.036787842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:46 compute-0 sudo[98567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:46 compute-0 sudo[98567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:46.05013196 +0000 UTC m=+0.164721390 container init 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:46.063308033 +0000 UTC m=+0.177897413 container start 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:46.068178991 +0000 UTC m=+0.182768451 container attach 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:46 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:46 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:46 compute-0 podman[98689]: 2025-10-01 13:10:46.653840437 +0000 UTC m=+0.079224075 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:10:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct 01 13:10:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 01 13:10:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 01 13:10:46 compute-0 systemd[1]: libpod-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope: Deactivated successfully.
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:46.678514239 +0000 UTC m=+0.793103619 container died 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 01 13:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d632ac7034bc66aa6d0b6606b2ecd0f10f77fe1701ee320328c75959fc4d6a0e-merged.mount: Deactivated successfully.
Oct 01 13:10:46 compute-0 podman[98537]: 2025-10-01 13:10:46.737101214 +0000 UTC m=+0.851690564 container remove 7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234 (image=quay.io/ceph/ceph:v18, name=adoring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:10:46 compute-0 systemd[1]: libpod-conmon-7445fd6b73f03cc1be5df55d54f55bca6864955b694575ab7b78a51a7ed1c234.scope: Deactivated successfully.
Oct 01 13:10:46 compute-0 sudo[98493]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:46 compute-0 podman[98689]: 2025-10-01 13:10:46.790077628 +0000 UTC m=+0.215461266 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:10:47 compute-0 sudo[98841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xclhtpgtovkwsgcaxxhspkeagieqjoti ; /usr/bin/python3'
Oct 01 13:10:47 compute-0 sudo[98841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:47 compute-0 sudo[98567]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bc74f53b-49f5-4c31-81bc-9acfbd9306bd does not exist
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev eeff341c-919b-4639-ae9b-8b83bf6db126 does not exist
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a61f3cc-7877-4a8c-b8f7-25702c0380a2 does not exist
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:47 compute-0 python3[98849]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:47 compute-0 sudo[98850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:47 compute-0 sudo[98850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:47 compute-0 sudo[98850]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:47 compute-0 ceph-mon[74802]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3408659514' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:47 compute-0 podman[98876]: 2025-10-01 13:10:47.584923768 +0000 UTC m=+0.059430772 container create e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct 01 13:10:47 compute-0 sudo[98878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:47 compute-0 sudo[98878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:47 compute-0 sudo[98878]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:47 compute-0 systemd[1]: Started libpod-conmon-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope.
Oct 01 13:10:47 compute-0 podman[98876]: 2025-10-01 13:10:47.565908149 +0000 UTC m=+0.040415173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:47 compute-0 sudo[98918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:47 compute-0 sudo[98918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:47 compute-0 sudo[98918]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:47 compute-0 podman[98876]: 2025-10-01 13:10:47.699146828 +0000 UTC m=+0.173653912 container init e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:10:47 compute-0 podman[98876]: 2025-10-01 13:10:47.709536776 +0000 UTC m=+0.184043810 container start e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:10:47 compute-0 podman[98876]: 2025-10-01 13:10:47.713470485 +0000 UTC m=+0.187977519 container attach e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:10:47
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta']
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:10:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:10:47 compute-0 sudo[98947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:10:47 compute-0 sudo[98947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:10:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.187467348 +0000 UTC m=+0.051778119 container create 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:48 compute-0 systemd[1]: Started libpod-conmon-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope.
Oct 01 13:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.170300735 +0000 UTC m=+0.034611526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.269473568 +0000 UTC m=+0.133784429 container init 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.275041537 +0000 UTC m=+0.139352308 container start 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:10:48 compute-0 quizzical_kapitsa[99048]: 167 167
Oct 01 13:10:48 compute-0 systemd[1]: libpod-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope: Deactivated successfully.
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.280537475 +0000 UTC m=+0.144848276 container attach 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.280783362 +0000 UTC m=+0.145094153 container died 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c0f76f56f0670c64048d8d2975703eb685f42ed1ec26b94ab78cda00fec1563-merged.mount: Deactivated successfully.
Oct 01 13:10:48 compute-0 podman[99032]: 2025-10-01 13:10:48.31712329 +0000 UTC m=+0.181434071 container remove 48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:10:48 compute-0 systemd[1]: libpod-conmon-48d02c3ed2e58485735f6c41225bc088374cce3ca9a1e242fa05cb83d5e1a752.scope: Deactivated successfully.
Oct 01 13:10:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 13:10:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821532168' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:10:48 compute-0 amazing_sanderson[98925]: 
Oct 01 13:10:48 compute-0 amazing_sanderson[98925]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":167,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1759324211,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83767296,"bytes_avail":64328159232,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{}}
Oct 01 13:10:48 compute-0 systemd[1]: libpod-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope: Deactivated successfully.
Oct 01 13:10:48 compute-0 conmon[98925]: conmon e2929352878e8c4afcea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope/container/memory.events
Oct 01 13:10:48 compute-0 podman[98876]: 2025-10-01 13:10:48.354842548 +0000 UTC m=+0.829349552 container died e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-af0601b66e012301850648ec50d68764f5684f59cc926b12338750d7d00c6020-merged.mount: Deactivated successfully.
Oct 01 13:10:48 compute-0 podman[98876]: 2025-10-01 13:10:48.406328218 +0000 UTC m=+0.880835252 container remove e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437 (image=quay.io/ceph/ceph:v18, name=amazing_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:48 compute-0 systemd[1]: libpod-conmon-e2929352878e8c4afceab79971b04be9f92e7b55b7743b9fc948b4dc5f5d5437.scope: Deactivated successfully.
Oct 01 13:10:48 compute-0 sudo[98841]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:48 compute-0 podman[99087]: 2025-10-01 13:10:48.484820459 +0000 UTC m=+0.052191832 container create 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:48 compute-0 systemd[1]: Started libpod-conmon-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope.
Oct 01 13:10:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct 01 13:10:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:48 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/821532168' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:10:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct 01 13:10:48 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct 01 13:10:48 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 01 13:10:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:10:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:48 compute-0 podman[99087]: 2025-10-01 13:10:48.460998143 +0000 UTC m=+0.028369586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 podman[99087]: 2025-10-01 13:10:48.584488456 +0000 UTC m=+0.151859809 container init 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:10:48 compute-0 podman[99087]: 2025-10-01 13:10:48.600780702 +0000 UTC m=+0.168152055 container start 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:10:48 compute-0 podman[99087]: 2025-10-01 13:10:48.604194857 +0000 UTC m=+0.171566210 container attach 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:10:48 compute-0 sudo[99132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjnggmduwuidbofkfcaiziavhklfvjuk ; /usr/bin/python3'
Oct 01 13:10:48 compute-0 sudo[99132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:48 compute-0 python3[99134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:48 compute-0 podman[99135]: 2025-10-01 13:10:48.869268933 +0000 UTC m=+0.050955842 container create 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:10:48 compute-0 systemd[1]: Started libpod-conmon-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope.
Oct 01 13:10:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:48 compute-0 podman[99135]: 2025-10-01 13:10:48.852583475 +0000 UTC m=+0.034270404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:48 compute-0 podman[99135]: 2025-10-01 13:10:48.963856256 +0000 UTC m=+0.145543185 container init 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:48 compute-0 podman[99135]: 2025-10-01 13:10:48.976172631 +0000 UTC m=+0.157859570 container start 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:48 compute-0 podman[99135]: 2025-10-01 13:10:48.98004795 +0000 UTC m=+0.161734859 container attach 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct 01 13:10:49 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:49 compute-0 ceph-mon[74802]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:49 compute-0 ceph-mon[74802]: osdmap e31: 3 total, 3 up, 3 in
Oct 01 13:10:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856850079' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 13:10:49 compute-0 vibrant_lamport[99152]: 
Oct 01 13:10:49 compute-0 vibrant_lamport[99152]: {"epoch":1,"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","modified":"2025-10-01T13:07:55.363588Z","created":"2025-10-01T13:07:55.363588Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct 01 13:10:49 compute-0 vibrant_lamport[99152]: dumped monmap epoch 1
Oct 01 13:10:49 compute-0 sshd-session[99156]: Invalid user ahsan from 156.236.31.46 port 43832
Oct 01 13:10:49 compute-0 systemd[1]: libpod-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope: Deactivated successfully.
Oct 01 13:10:49 compute-0 mystifying_fermat[99104]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:10:49 compute-0 mystifying_fermat[99104]: --> relative data size: 1.0
Oct 01 13:10:49 compute-0 mystifying_fermat[99104]: --> All data devices are unavailable
Oct 01 13:10:49 compute-0 podman[99202]: 2025-10-01 13:10:49.670666483 +0000 UTC m=+0.029579291 container died 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:49 compute-0 systemd[1]: libpod-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Deactivated successfully.
Oct 01 13:10:49 compute-0 systemd[1]: libpod-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Consumed 1.024s CPU time.
Oct 01 13:10:49 compute-0 podman[99087]: 2025-10-01 13:10:49.680522574 +0000 UTC m=+1.247893937 container died 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-098daa066444bfd697b7b7c3e457bf85516ded40b683eddeb282d90f4bbcd9bd-merged.mount: Deactivated successfully.
Oct 01 13:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-19754d64e29e2ad0ef7a21af6c2688c4f827a785630169c12da323043e8d89e0-merged.mount: Deactivated successfully.
Oct 01 13:10:49 compute-0 sshd-session[99156]: Received disconnect from 156.236.31.46 port 43832:11: Bye Bye [preauth]
Oct 01 13:10:49 compute-0 sshd-session[99156]: Disconnected from invalid user ahsan 156.236.31.46 port 43832 [preauth]
Oct 01 13:10:49 compute-0 podman[99202]: 2025-10-01 13:10:49.731390353 +0000 UTC m=+0.090303141 container remove 18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8 (image=quay.io/ceph/ceph:v18, name=vibrant_lamport, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:10:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:10:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:49 compute-0 systemd[1]: libpod-conmon-18395097f0f1a80492eafc9c578608beebc7822a777b82c454f7199933efc0f8.scope: Deactivated successfully.
Oct 01 13:10:49 compute-0 podman[99087]: 2025-10-01 13:10:49.740542633 +0000 UTC m=+1.307913996 container remove 2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:49 compute-0 systemd[1]: libpod-conmon-2561b1ce4aa1818f1d74e3ed27a13e23b38f6db4143d527024287cbf5f99512e.scope: Deactivated successfully.
Oct 01 13:10:49 compute-0 sudo[99132]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:49 compute-0 sudo[98947]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:49 compute-0 sudo[99230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:49 compute-0 sudo[99230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:49 compute-0 sudo[99230]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:49 compute-0 sudo[99255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:49 compute-0 sudo[99255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:49 compute-0 sudo[99255]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:49 compute-0 sudo[99280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:49 compute-0 sudo[99280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:49 compute-0 sudo[99280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:50 compute-0 sudo[99305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:10:50 compute-0 sudo[99305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:50 compute-0 sudo[99371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poxcxpjwpqteaktutlprnxqgrzxvjqdz ; /usr/bin/python3'
Oct 01 13:10:50 compute-0 sudo[99371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.321907187 +0000 UTC m=+0.042966689 container create 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:10:50 compute-0 python3[99378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:50 compute-0 systemd[1]: Started libpod-conmon-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope.
Oct 01 13:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.389157386 +0000 UTC m=+0.110216958 container init 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.302162937 +0000 UTC m=+0.023222499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.398681137 +0000 UTC m=+0.119740619 container start 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.403760192 +0000 UTC m=+0.124819694 container attach 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:10:50 compute-0 recursing_archimedes[99413]: 167 167
Oct 01 13:10:50 compute-0 systemd[1]: libpod-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope: Deactivated successfully.
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.405444764 +0000 UTC m=+0.126504266 container died 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-203512a7299b301e9e2b842d50b8ab95f0ee89b0dbe0766ad27e00d025c27a8b-merged.mount: Deactivated successfully.
Oct 01 13:10:50 compute-0 podman[99412]: 2025-10-01 13:10:50.43389714 +0000 UTC m=+0.068509948 container create 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:50 compute-0 podman[99395]: 2025-10-01 13:10:50.449096303 +0000 UTC m=+0.170155825 container remove 11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:50 compute-0 systemd[1]: libpod-conmon-11d7495a197115e0d86d573a044c6191d0521d7f901ac5b6c87d1a7cfe542820.scope: Deactivated successfully.
Oct 01 13:10:50 compute-0 systemd[1]: Started libpod-conmon-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope.
Oct 01 13:10:50 compute-0 podman[99412]: 2025-10-01 13:10:50.395230882 +0000 UTC m=+0.029843700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 podman[99412]: 2025-10-01 13:10:50.51102414 +0000 UTC m=+0.145636968 container init 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:10:50 compute-0 podman[99412]: 2025-10-01 13:10:50.517381834 +0000 UTC m=+0.151994642 container start 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:10:50 compute-0 podman[99412]: 2025-10-01 13:10:50.52052708 +0000 UTC m=+0.155139888 container attach 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct 01 13:10:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct 01 13:10:50 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct 01 13:10:50 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 01 13:10:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:10:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:50 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=10.087410927s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 54.869903564s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:50 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=10.087410927s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 54.869903564s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: osdmap e32: 3 total, 3 up, 3 in
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1856850079' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:50 compute-0 ceph-mon[74802]: osdmap e33: 3 total, 3 up, 3 in
Oct 01 13:10:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:50 compute-0 podman[99454]: 2025-10-01 13:10:50.621075734 +0000 UTC m=+0.053801080 container create 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:10:50 compute-0 systemd[1]: Started libpod-conmon-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope.
Oct 01 13:10:50 compute-0 podman[99454]: 2025-10-01 13:10:50.596017711 +0000 UTC m=+0.028743127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:50 compute-0 podman[99454]: 2025-10-01 13:10:50.736007006 +0000 UTC m=+0.168732372 container init 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:50 compute-0 podman[99454]: 2025-10-01 13:10:50.744236657 +0000 UTC m=+0.176961993 container start 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:50 compute-0 podman[99454]: 2025-10-01 13:10:50.751685394 +0000 UTC m=+0.184410740 container attach 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2888623065' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 01 13:10:51 compute-0 admiring_haibt[99445]: [client.openstack]
Oct 01 13:10:51 compute-0 admiring_haibt[99445]:         key = AQCSJ91oAAAAABAAnrq6Xzc1a2WsnMS+ZR1nnw==
Oct 01 13:10:51 compute-0 admiring_haibt[99445]:         caps mgr = "allow *"
Oct 01 13:10:51 compute-0 admiring_haibt[99445]:         caps mon = "profile rbd"
Oct 01 13:10:51 compute-0 admiring_haibt[99445]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct 01 13:10:51 compute-0 systemd[1]: libpod-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope: Deactivated successfully.
Oct 01 13:10:51 compute-0 podman[99412]: 2025-10-01 13:10:51.093641804 +0000 UTC m=+0.728254652 container died 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a07e7015175848c67fbe936c57b50a1ad40b556f855f894c34ec18829b6f8cf9-merged.mount: Deactivated successfully.
Oct 01 13:10:51 compute-0 podman[99412]: 2025-10-01 13:10:51.154992403 +0000 UTC m=+0.789605241 container remove 6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:10:51 compute-0 systemd[1]: libpod-conmon-6ab382f24ad0e3a4de7fbbe66e549b63386e39db22dd0711b3f4f0687c875b2a.scope: Deactivated successfully.
Oct 01 13:10:51 compute-0 sudo[99371]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]: {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     "0": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "devices": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "/dev/loop3"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             ],
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_name": "ceph_lv0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_size": "21470642176",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "name": "ceph_lv0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "tags": {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.crush_device_class": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.encrypted": "0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_id": "0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.vdo": "0"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             },
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "vg_name": "ceph_vg0"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         }
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     ],
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     "1": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "devices": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "/dev/loop4"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             ],
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_name": "ceph_lv1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_size": "21470642176",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "name": "ceph_lv1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "tags": {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.crush_device_class": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.encrypted": "0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_id": "1",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.vdo": "0"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             },
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "vg_name": "ceph_vg1"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         }
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     ],
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     "2": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "devices": [
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "/dev/loop5"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             ],
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_name": "ceph_lv2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_size": "21470642176",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "name": "ceph_lv2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "tags": {
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.cluster_name": "ceph",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.crush_device_class": "",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.encrypted": "0",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osd_id": "2",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:                 "ceph.vdo": "0"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             },
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "type": "block",
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:             "vg_name": "ceph_vg2"
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:         }
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]:     ]
Oct 01 13:10:51 compute-0 affectionate_varahamihira[99471]: }
Oct 01 13:10:51 compute-0 systemd[1]: libpod-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope: Deactivated successfully.
Oct 01 13:10:51 compute-0 podman[99454]: 2025-10-01 13:10:51.521082988 +0000 UTC m=+0.953808424 container died 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-13c5e4fb695eb8422e80884a1177a8bdb783f843dece4d9530373f94eefd82b7-merged.mount: Deactivated successfully.
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct 01 13:10:51 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 podman[99454]: 2025-10-01 13:10:51.60119233 +0000 UTC m=+1.033917706 container remove 3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:10:51 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:51 compute-0 systemd[1]: libpod-conmon-3353adedc7959a311ef50ae2379512f2ef6a4824238a52742a32cf032d9cd5b4.scope: Deactivated successfully.
Oct 01 13:10:51 compute-0 ceph-mon[74802]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:51 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2888623065' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 01 13:10:51 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:51 compute-0 ceph-mon[74802]: osdmap e34: 3 total, 3 up, 3 in
Oct 01 13:10:51 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 01 13:10:51 compute-0 sudo[99305]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:10:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:51 compute-0 sudo[99526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:51 compute-0 sudo[99526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:51 compute-0 sudo[99526]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:51 compute-0 sudo[99551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:51 compute-0 sudo[99551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:51 compute-0 sudo[99551]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:51 compute-0 sudo[99576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:51 compute-0 sudo[99576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:51 compute-0 sudo[99576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:51 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 01 13:10:51 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 01 13:10:52 compute-0 sudo[99601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:10:52 compute-0 sudo[99601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=14.442940712s) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active pruub 65.777542114s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=14.442940712s) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown pruub 65.777542114s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=15/16 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.476836702 +0000 UTC m=+0.065560260 container create 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:10:52 compute-0 systemd[1]: Started libpod-conmon-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope.
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.452332704 +0000 UTC m=+0.041056262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.575272861 +0000 UTC m=+0.163996419 container init 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:10:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct 01 13:10:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.586274536 +0000 UTC m=+0.174998084 container start 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:52 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.590050601 +0000 UTC m=+0.178774149 container attach 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:52 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 01 13:10:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:10:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:52 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=10.103324890s) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active pruub 56.894187927s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:52 compute-0 cool_cohen[99757]: 167 167
Oct 01 13:10:52 compute-0 systemd[1]: libpod-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope: Deactivated successfully.
Oct 01 13:10:52 compute-0 conmon[99757]: conmon 5e711ca65357e43e04b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope/container/memory.events
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.597652723 +0000 UTC m=+0.186376261 container died 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:52 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=10.103324890s) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown pruub 56.894187927s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=33/35 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=15/15 les/c/f=16/16/0 sis=33) [1] r=0 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:52 compute-0 ceph-mon[74802]: osdmap e35: 3 total, 3 up, 3 in
Oct 01 13:10:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9498807502cc90d7619902696b3fb4adba902aaf4d1d0954f708e117b80e18c7-merged.mount: Deactivated successfully.
Oct 01 13:10:52 compute-0 podman[99715]: 2025-10-01 13:10:52.667209923 +0000 UTC m=+0.255933491 container remove 5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:10:52 compute-0 systemd[1]: libpod-conmon-5e711ca65357e43e04b0b6c32709772ff5374f15f7b3813e1673761b87d8171b.scope: Deactivated successfully.
Oct 01 13:10:52 compute-0 sudo[99851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bguswjxexcclqxkosnbbqjgwkokbqaem ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324252.3191338-34018-32461601622966/async_wrapper.py j241018028218 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324252.3191338-34018-32461601622966/AnsiballZ_command.py _'
Oct 01 13:10:52 compute-0 sudo[99851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:52 compute-0 ceph-mgr[75103]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct 01 13:10:52 compute-0 podman[99857]: 2025-10-01 13:10:52.844910817 +0000 UTC m=+0.054225993 container create 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:10:52 compute-0 systemd[1]: Started libpod-conmon-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope.
Oct 01 13:10:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:52 compute-0 podman[99857]: 2025-10-01 13:10:52.819869584 +0000 UTC m=+0.029184790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:52 compute-0 podman[99857]: 2025-10-01 13:10:52.920748558 +0000 UTC m=+0.130063754 container init 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:52 compute-0 podman[99857]: 2025-10-01 13:10:52.930188156 +0000 UTC m=+0.139503312 container start 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:10:52 compute-0 podman[99857]: 2025-10-01 13:10:52.940761548 +0000 UTC m=+0.150076704 container attach 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:52 compute-0 ansible-async_wrapper.py[99859]: Invoked with j241018028218 30 /home/zuul/.ansible/tmp/ansible-tmp-1759324252.3191338-34018-32461601622966/AnsiballZ_command.py _
Oct 01 13:10:52 compute-0 ansible-async_wrapper.py[99882]: Starting module and watcher
Oct 01 13:10:52 compute-0 ansible-async_wrapper.py[99882]: Start watching 99883 (30)
Oct 01 13:10:52 compute-0 ansible-async_wrapper.py[99883]: Start module (99883)
Oct 01 13:10:52 compute-0 ansible-async_wrapper.py[99859]: Return async_wrapper task started.
Oct 01 13:10:52 compute-0 sudo[99851]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:53 compute-0 python3[99884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=15.529828072s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 72.803611755s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=15.529828072s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 72.803611755s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.137686539 +0000 UTC m=+0.042187027 container create 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:10:53 compute-0 systemd[1]: Started libpod-conmon-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope.
Oct 01 13:10:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.204092402 +0000 UTC m=+0.108592900 container init 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.209263309 +0000 UTC m=+0.113763807 container start 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.212837828 +0000 UTC m=+0.117338326 container attach 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.118161414 +0000 UTC m=+0.022661932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Oct 01 13:10:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event c7a299eb-1fe2-40d1-b8f9-439c2ff29ac3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event c537a440-1190-425e-99dc-5e76a685055c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 1a08c9f5-e1a5-4905-b8dc-113644a0448d (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event f1d7fae5-9ea8-4012-b34f-a26114a1e0b5 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 7ff654ee-e209-44ce-afe0-0a75c7b339bf (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 6a0a56e0-046b-4d78-8b2c-daaeb707fe2a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [2] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=35/36 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:53 compute-0 ceph-mon[74802]: pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:53 compute-0 ceph-mon[74802]: 2.1 scrub starts
Oct 01 13:10:53 compute-0 ceph-mon[74802]: 2.1 scrub ok
Oct 01 13:10:53 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:10:53 compute-0 ceph-mon[74802]: osdmap e36: 3 total, 3 up, 3 in
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:53 compute-0 stoic_agnesi[99901]: 
Oct 01 13:10:53 compute-0 stoic_agnesi[99901]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 13:10:53 compute-0 systemd[1]: libpod-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope: Deactivated successfully.
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.788179559 +0000 UTC m=+0.692680087 container died 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]: {
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_id": 0,
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "type": "bluestore"
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     },
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_id": 2,
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "type": "bluestore"
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     },
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_id": 1,
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:         "type": "bluestore"
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]:     }
Oct 01 13:10:53 compute-0 vibrant_ptolemy[99875]: }
Oct 01 13:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff836bdadaf4942ca8a343538bd4f5c1003061498890f0f5cc1d509404c483bc-merged.mount: Deactivated successfully.
Oct 01 13:10:53 compute-0 systemd[76436]: Starting Mark boot as successful...
Oct 01 13:10:53 compute-0 systemd[76436]: Finished Mark boot as successful.
Oct 01 13:10:53 compute-0 podman[99885]: 2025-10-01 13:10:53.835140911 +0000 UTC m=+0.739641409 container remove 09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf (image=quay.io/ceph/ceph:v18, name=stoic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:10:53 compute-0 systemd[1]: libpod-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope: Deactivated successfully.
Oct 01 13:10:53 compute-0 podman[99857]: 2025-10-01 13:10:53.840434092 +0000 UTC m=+1.049749258 container died 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:10:53 compute-0 ansible-async_wrapper.py[99883]: Module complete (99883)
Oct 01 13:10:53 compute-0 systemd[1]: libpod-conmon-09b545cc209ffc7968a0816fabf774c302f495c092c1c5660eacb05e1917c2bf.scope: Deactivated successfully.
Oct 01 13:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d2fe462b8ee2ae7940a809b1efe1fe88bebbc4c606093d37ac10a1d6f01038-merged.mount: Deactivated successfully.
Oct 01 13:10:53 compute-0 podman[99857]: 2025-10-01 13:10:53.914265631 +0000 UTC m=+1.123580807 container remove 1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:10:53 compute-0 systemd[1]: libpod-conmon-1d8d5de87cb846f7167de95b71ceb078712f30cf01f332d8b3093a3c8deccc63.scope: Deactivated successfully.
Oct 01 13:10:53 compute-0 sudo[99601]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1))
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct 01 13:10:53 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct 01 13:10:54 compute-0 sudo[99977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:54 compute-0 sudo[99977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:54 compute-0 sudo[99977]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 01 13:10:54 compute-0 sudo[100025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:54 compute-0 sudo[100025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:54 compute-0 sudo[100071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqivivtdpqowbcssenyeqkluvetcxor ; /usr/bin/python3'
Oct 01 13:10:54 compute-0 sudo[100071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:54 compute-0 sudo[100025]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 01 13:10:54 compute-0 sudo[100076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:54 compute-0 sudo[100076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:54 compute-0 sudo[100076]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:54 compute-0 sudo[100101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:10:54 compute-0 sudo[100101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:54 compute-0 python3[100075]: ansible-ansible.legacy.async_status Invoked with jid=j241018028218.99859 mode=status _async_dir=/root/.ansible_async
Oct 01 13:10:54 compute-0 sudo[100071]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:54 compute-0 sudo[100186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbknuwvqjqxayfuqbpjnvfzppvuqjtt ; /usr/bin/python3'
Oct 01 13:10:54 compute-0 sudo[100186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:54 compute-0 python3[100191]: ansible-ansible.legacy.async_status Invoked with jid=j241018028218.99859 mode=cleanup _async_dir=/root/.ansible_async
Oct 01 13:10:54 compute-0 sudo[100186]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct 01 13:10:54 compute-0 ceph-mon[74802]: 3.1 deep-scrub starts
Oct 01 13:10:54 compute-0 ceph-mon[74802]: 3.1 deep-scrub ok
Oct 01 13:10:54 compute-0 ceph-mon[74802]: pgmap v90: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.rmxmfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:54 compute-0 ceph-mon[74802]: Deploying daemon rgw.rgw.compute-0.rmxmfa on compute-0
Oct 01 13:10:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 01 13:10:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct 01 13:10:54 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct 01 13:10:54 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=11.088228226s) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active pruub 64.893386841s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:54 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37 pruub=11.088228226s) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown pruub 64.893386841s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.670559817 +0000 UTC m=+0.050851961 container create 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:10:54 compute-0 systemd[1]: Started libpod-conmon-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope.
Oct 01 13:10:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.651865898 +0000 UTC m=+0.032158082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.752405171 +0000 UTC m=+0.132697385 container init 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.75929503 +0000 UTC m=+0.139587174 container start 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.762707025 +0000 UTC m=+0.142999249 container attach 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:10:54 compute-0 bold_chaplygin[100231]: 167 167
Oct 01 13:10:54 compute-0 systemd[1]: libpod-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope: Deactivated successfully.
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.768928284 +0000 UTC m=+0.149220448 container died 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae3bbdc0271a070d1c983cde9df1970d877280ddc8f44892f59ef611e70c2f6-merged.mount: Deactivated successfully.
Oct 01 13:10:54 compute-0 podman[100215]: 2025-10-01 13:10:54.809613854 +0000 UTC m=+0.189905988 container remove 9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:54 compute-0 systemd[1]: libpod-conmon-9c3ad6d3e3a3f1e899b4671545da226035bd8573783cb0753f082a82230ee25e.scope: Deactivated successfully.
Oct 01 13:10:54 compute-0 systemd[1]: Reloading.
Oct 01 13:10:54 compute-0 systemd-rc-local-generator[100277]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:54 compute-0 systemd-sysv-generator[100281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:55 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 01 13:10:55 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 01 13:10:55 compute-0 systemd[1]: Reloading.
Oct 01 13:10:55 compute-0 systemd-rc-local-generator[100344]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:55 compute-0 systemd-sysv-generator[100348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:10:55 compute-0 sudo[100315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrozjtagmjkuaktwszirfkyjqseoaydg ; /usr/bin/python3'
Oct 01 13:10:55 compute-0 sudo[100315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:55 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.rmxmfa for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct 01 13:10:55 compute-0 ceph-mon[74802]: 4.1 scrub starts
Oct 01 13:10:55 compute-0 ceph-mon[74802]: 4.1 scrub ok
Oct 01 13:10:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:10:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 01 13:10:55 compute-0 ceph-mon[74802]: osdmap e37: 3 total, 3 up, 3 in
Oct 01 13:10:55 compute-0 python3[100353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct 01 13:10:55 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=22/23 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=22/22 les/c/f=23/23/0 sis=37) [1] r=0 lpr=37 pi=[22,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=8.030547142s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 67.867408752s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=8.030547142s) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 67.867408752s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 139 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:55 compute-0 podman[100380]: 2025-10-01 13:10:55.768752211 +0000 UTC m=+0.077711649 container create d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:10:55 compute-0 systemd[1]: Started libpod-conmon-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope.
Oct 01 13:10:55 compute-0 podman[100411]: 2025-10-01 13:10:55.833407061 +0000 UTC m=+0.057039710 container create aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:55 compute-0 podman[100380]: 2025-10-01 13:10:55.747941546 +0000 UTC m=+0.056900994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 podman[100380]: 2025-10-01 13:10:55.865461787 +0000 UTC m=+0.174421205 container init d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 01 13:10:55 compute-0 podman[100380]: 2025-10-01 13:10:55.874791842 +0000 UTC m=+0.183751300 container start d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:55 compute-0 podman[100380]: 2025-10-01 13:10:55.878589628 +0000 UTC m=+0.187549056 container attach d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d749c1a69fec47df0515eef4d683f1f3832b3960d802435ab703163a3b1b5562/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.rmxmfa supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:55 compute-0 podman[100411]: 2025-10-01 13:10:55.813513844 +0000 UTC m=+0.037146573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:55 compute-0 podman[100411]: 2025-10-01 13:10:55.919228855 +0000 UTC m=+0.142861554 container init aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:55 compute-0 podman[100411]: 2025-10-01 13:10:55.925342502 +0000 UTC m=+0.148975171 container start aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:55 compute-0 bash[100411]: aad65a249f3d9c8d2205ff4de98f33b1f76ef8b51f5bb3dd231b6c5029e0c097
Oct 01 13:10:55 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.rmxmfa for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:10:55 compute-0 sudo[100101]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:55 compute-0 radosgw[100440]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:10:55 compute-0 radosgw[100440]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 01 13:10:55 compute-0 radosgw[100440]: framework: beast
Oct 01 13:10:55 compute-0 radosgw[100440]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct 01 13:10:55 compute-0 radosgw[100440]: init_numa not setting numa affinity
Oct 01 13:10:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 13:10:56 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1))
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 70c62511-9186-47e2-a676-587875b6c394 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct 01 13:10:56 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1))
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct 01 13:10:56 compute-0 sudo[100502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:56 compute-0 sudo[100502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:56 compute-0 sudo[100502]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 01 13:10:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 01 13:10:56 compute-0 sudo[100527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:56 compute-0 sudo[100527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:56 compute-0 sudo[100527]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:56 compute-0 sudo[100569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:56 compute-0 sudo[100569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:56 compute-0 sudo[100569]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:56 compute-0 sudo[100596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f
Oct 01 13:10:56 compute-0 sudo[100596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:56 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:56 compute-0 happy_aryabhata[100428]: 
Oct 01 13:10:56 compute-0 happy_aryabhata[100428]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct 01 13:10:56 compute-0 systemd[1]: libpod-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope: Deactivated successfully.
Oct 01 13:10:56 compute-0 conmon[100428]: conmon d7698c900df76c009868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope/container/memory.events
Oct 01 13:10:56 compute-0 podman[100380]: 2025-10-01 13:10:56.455469866 +0000 UTC m=+0.764429304 container died d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83b68196ee0d7e869f8b5f9080931bedeadea67db2e0df39db06dba9b2088d7-merged.mount: Deactivated successfully.
Oct 01 13:10:56 compute-0 podman[100380]: 2025-10-01 13:10:56.504306154 +0000 UTC m=+0.813265582 container remove d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c (image=quay.io/ceph/ceph:v18, name=happy_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:10:56 compute-0 systemd[1]: libpod-conmon-d7698c900df76c00986811ab451a7f0cfcfd9c807dd892a2c3a4065a1058485c.scope: Deactivated successfully.
Oct 01 13:10:56 compute-0 sudo[100315]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct 01 13:10:56 compute-0 ceph-mon[74802]: 2.2 scrub starts
Oct 01 13:10:56 compute-0 ceph-mon[74802]: 2.2 scrub ok
Oct 01 13:10:56 compute-0 ceph-mon[74802]: osdmap e38: 3 total, 3 up, 3 in
Oct 01 13:10:56 compute-0 ceph-mon[74802]: pgmap v93: 177 pgs: 139 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: Saving service rgw.rgw spec with placement compute-0
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.vhkcbm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:56 compute-0 ceph-mon[74802]: Deploying daemon mds.cephfs.compute-0.vhkcbm on compute-0
Oct 01 13:10:56 compute-0 ceph-mon[74802]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct 01 13:10:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct 01 13:10:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=37/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [0] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.723821603 +0000 UTC m=+0.050134799 container create 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:10:56 compute-0 systemd[1]: Started libpod-conmon-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope.
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.700961326 +0000 UTC m=+0.027274532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.829387819 +0000 UTC m=+0.155701065 container init 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.839189609 +0000 UTC m=+0.165502785 container start 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:10:56 compute-0 pensive_bhabha[100690]: 167 167
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.842947452 +0000 UTC m=+0.169260658 container attach 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:10:56 compute-0 systemd[1]: libpod-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope: Deactivated successfully.
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.844005445 +0000 UTC m=+0.170318641 container died 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-69aa8480cd30e24b90eebd85b9b47518906d2ec958b1747f44e0df70bc605aec-merged.mount: Deactivated successfully.
Oct 01 13:10:56 compute-0 podman[100674]: 2025-10-01 13:10:56.884380885 +0000 UTC m=+0.210694081 container remove 8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:10:56 compute-0 systemd[1]: libpod-conmon-8a7e454a3de4ff6f93ee78b6953e7d7fdb26b72a797dbe90d3256c33b04c63f5.scope: Deactivated successfully.
Oct 01 13:10:56 compute-0 systemd[1]: Reloading.
Oct 01 13:10:57 compute-0 systemd-rc-local-generator[100737]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:57 compute-0 systemd-sysv-generator[100740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 01 13:10:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 01 13:10:57 compute-0 systemd[1]: Reloading.
Oct 01 13:10:57 compute-0 systemd-rc-local-generator[100802]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:10:57 compute-0 systemd-sysv-generator[100805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:10:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:57 compute-0 sudo[100774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwmhmcnudnaxnbbfiailqlrozgujunl ; /usr/bin/python3'
Oct 01 13:10:57 compute-0 sudo[100774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:57 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.vhkcbm for eb4b6ead-01d1-53b3-a52a-47dcc600555f...
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct 01 13:10:57 compute-0 ceph-mon[74802]: 2.3 scrub starts
Oct 01 13:10:57 compute-0 ceph-mon[74802]: 2.3 scrub ok
Oct 01 13:10:57 compute-0 ceph-mon[74802]: 4.2 scrub starts
Oct 01 13:10:57 compute-0 ceph-mon[74802]: 4.2 scrub ok
Oct 01 13:10:57 compute-0 ceph-mon[74802]: osdmap e39: 3 total, 3 up, 3 in
Oct 01 13:10:57 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 01 13:10:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v96: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:57 compute-0 python3[100813]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:57 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 10 completed events
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:57 compute-0 podman[100861]: 2025-10-01 13:10:57.832823396 +0000 UTC m=+0.048900702 container create 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:57 compute-0 podman[100862]: 2025-10-01 13:10:57.841269503 +0000 UTC m=+0.052844322 container create 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 01 13:10:57 compute-0 systemd[1]: Started libpod-conmon-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope.
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03807447dc2c704e5561140db9a00570c603fb3664c554220555f0eb126c6c01/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.vhkcbm supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:57 compute-0 podman[100861]: 2025-10-01 13:10:57.811779714 +0000 UTC m=+0.027856990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:57 compute-0 podman[100862]: 2025-10-01 13:10:57.813025702 +0000 UTC m=+0.024600541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:10:57 compute-0 podman[100862]: 2025-10-01 13:10:57.909621595 +0000 UTC m=+0.121196414 container init 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:10:57 compute-0 podman[100862]: 2025-10-01 13:10:57.915721952 +0000 UTC m=+0.127296771 container start 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:10:57 compute-0 podman[100861]: 2025-10-01 13:10:57.91733008 +0000 UTC m=+0.133407406 container init 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:57 compute-0 bash[100862]: 14f330f0450cafcfb15628aeff970024e9cdb619b7fff7233f08911fbe956283
Oct 01 13:10:57 compute-0 podman[100861]: 2025-10-01 13:10:57.924025134 +0000 UTC m=+0.140102400 container start 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:10:57 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.vhkcbm for eb4b6ead-01d1-53b3-a52a-47dcc600555f.
Oct 01 13:10:57 compute-0 podman[100861]: 2025-10-01 13:10:57.92780475 +0000 UTC m=+0.143882056 container attach 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:57 compute-0 ansible-async_wrapper.py[99882]: Done in kid B.
Oct 01 13:10:57 compute-0 sudo[100596]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:57 compute-0 ceph-mds[100898]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:10:57 compute-0 ceph-mds[100898]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 01 13:10:57 compute-0 ceph-mds[100898]: main not setting numa affinity
Oct 01 13:10:57 compute-0 ceph-mds[100898]: pidfile_write: ignore empty --pid-file
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:57 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mds-cephfs-compute-0-vhkcbm[100893]: starting mds.cephfs.compute-0.vhkcbm at 
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:57 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 2 from mon.0
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:57 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1))
Oct 01 13:10:57 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 1e9d9abe-f1a4-4c88-8515-a120df66529c (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct 01 13:10:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 sudo[100918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:58 compute-0 sudo[100918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 sudo[100918]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 sudo[100943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:10:58 compute-0 sudo[100943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 sudo[100943]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 sudo[100968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:58 compute-0 sudo[100968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 sudo[100968]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 sudo[100997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:10:58 compute-0 sudo[100997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 sudo[100997]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 sudo[101037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:10:58 compute-0 sudo[101037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 sudo[101037]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:58 compute-0 elated_carver[100891]: 
Oct 01 13:10:58 compute-0 elated_carver[100891]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct 01 13:10:58 compute-0 sudo[101062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:10:58 compute-0 sudo[101062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:10:58 compute-0 systemd[1]: libpod-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope: Deactivated successfully.
Oct 01 13:10:58 compute-0 podman[100861]: 2025-10-01 13:10:58.464290637 +0000 UTC m=+0.680367933 container died 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fff5a9978d32eed052368acfbcad1fa254e942b7ded155ae265358702217a4ee-merged.mount: Deactivated successfully.
Oct 01 13:10:58 compute-0 podman[100861]: 2025-10-01 13:10:58.511871127 +0000 UTC m=+0.727948413 container remove 5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78 (image=quay.io/ceph/ceph:v18, name=elated_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:10:58 compute-0 systemd[1]: libpod-conmon-5e868367c53792f475d497902198a98b578b3dd9a86e080250c80b859ba96b78.scope: Deactivated successfully.
Oct 01 13:10:58 compute-0 sudo[100774]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 01 13:10:58 compute-0 ceph-mon[74802]: 4.3 scrub starts
Oct 01 13:10:58 compute-0 ceph-mon[74802]: 4.3 scrub ok
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 01 13:10:58 compute-0 ceph-mon[74802]: osdmap e40: 3 total, 3 up, 3 in
Oct 01 13:10:58 compute-0 ceph-mon[74802]: pgmap v96: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:58 compute-0 ceph-mon[74802]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 new map
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T13:10:42.681473+0000
                                           modified        2025-10-01T13:10:42.681508+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.vhkcbm{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 3 from mon.0
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Monitors have assigned me to become a standby.
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:boot
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] as mds.0
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vhkcbm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.vhkcbm"} v 0) v1
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vhkcbm"}]: dispatch
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e3 all = 0
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e4 new map
Oct 01 13:10:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T13:10:42.681473+0000
                                           modified        2025-10-01T13:10:58.977008+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.vhkcbm{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 4 from mon.0
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x1
Oct 01 13:10:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:creating}
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x100
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x600
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x601
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x602
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x603
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x604
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x605
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x606
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x607
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x608
Oct 01 13:10:58 compute-0 ceph-mds[100898]: mds.0.cache creating system inode with ino:0x609
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.0.4 creating_done
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.vhkcbm is now active in filesystem cephfs as rank 0
Oct 01 13:10:59 compute-0 podman[101172]: 2025-10-01 13:10:59.07545426 +0000 UTC m=+0.076311396 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:10:59 compute-0 podman[101172]: 2025-10-01 13:10:59.193513688 +0000 UTC m=+0.194370844 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:10:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct 01 13:10:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:10:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct 01 13:10:59 compute-0 sudo[101293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfebbcicinuwaovrikoisygrowxyausv ; /usr/bin/python3'
Oct 01 13:10:59 compute-0 sudo[101293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:10:59 compute-0 python3[101297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct 01 13:10:59 compute-0 ceph-mon[74802]: osdmap e41: 3 total, 3 up, 3 in
Oct 01 13:10:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:boot
Oct 01 13:10:59 compute-0 ceph-mon[74802]: daemon mds.cephfs.compute-0.vhkcbm assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 01 13:10:59 compute-0 ceph-mon[74802]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 01 13:10:59 compute-0 ceph-mon[74802]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 01 13:10:59 compute-0 ceph-mon[74802]: Cluster is now healthy
Oct 01 13:10:59 compute-0 ceph-mon[74802]: fsmap cephfs:0 1 up:standby
Oct 01 13:10:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.vhkcbm"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:creating}
Oct 01 13:10:59 compute-0 ceph-mon[74802]: daemon mds.cephfs.compute-0.vhkcbm is now active in filesystem cephfs as rank 0
Oct 01 13:10:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 01 13:10:59 compute-0 ceph-mon[74802]: osdmap e42: 3 total, 3 up, 3 in
Oct 01 13:10:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:10:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v99: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:10:59 compute-0 podman[101328]: 2025-10-01 13:10:59.740906287 +0000 UTC m=+0.046401654 container create 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:10:59 compute-0 systemd[1]: Started libpod-conmon-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope.
Oct 01 13:10:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:10:59 compute-0 podman[101328]: 2025-10-01 13:10:59.72687002 +0000 UTC m=+0.032365417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:10:59 compute-0 podman[101328]: 2025-10-01 13:10:59.825487125 +0000 UTC m=+0.130982582 container init 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:10:59 compute-0 podman[101328]: 2025-10-01 13:10:59.835222981 +0000 UTC m=+0.140718368 container start 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:10:59 compute-0 podman[101328]: 2025-10-01 13:10:59.838985526 +0000 UTC m=+0.144480943 container attach 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:10:59 compute-0 sudo[101062]: pam_unix(sudo:session): session closed for user root
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:10:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0d850c8e-f707-4904-b299-68e7cd43a264 does not exist
Oct 01 13:10:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fd509925-4ac6-40c2-9fc8-572e1c44b522 does not exist
Oct 01 13:10:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 05b89096-e735-4f4f-827b-5d766f5a7532 does not exist
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e5 new map
Oct 01 13:10:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-01T13:10:42.681473+0000
                                           modified        2025-10-01T13:10:59.983402+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.vhkcbm{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:active
Oct 01 13:10:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:active}
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm Updating MDS map to version 5 from mon.0
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.0.4 handle_mds_map i am now mds.0.4
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.0.4 recovery_done -- successful recovery!
Oct 01 13:10:59 compute-0 ceph-mds[100898]: mds.0.4 active_start
Oct 01 13:11:00 compute-0 sudo[101386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:00 compute-0 sudo[101386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:00 compute-0 sudo[101386]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:00 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 01 13:11:00 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 01 13:11:00 compute-0 sudo[101413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct 01 13:11:00 compute-0 sudo[101413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:00 compute-0 sudo[101413]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct 01 13:11:00 compute-0 sudo[101439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:00 compute-0 sudo[101439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:00 compute-0 sudo[101439]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:00 compute-0 sudo[101482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:11:00 compute-0 sudo[101482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:00 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:11:00 compute-0 beautiful_shamir[101364]: 
Oct 01 13:11:00 compute-0 beautiful_shamir[101364]: [{"container_id": "0abeef01559d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.45%", "created": "2025-10-01T13:09:28.103793Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-01T13:09:28.148660Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.931975Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-01T13:09:27.988610Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@crash.compute-0", "version": "18.2.7"}, {"container_id": "14f330f0450c", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.17%", "created": "2025-10-01T13:10:57.930405Z", "daemon_id": "cephfs.compute-0.vhkcbm", "daemon_name": "mds.cephfs.compute-0.vhkcbm", "daemon_type": "mds", "events": ["2025-10-01T13:10:57.980569Z daemon:mds.cephfs.compute-0.vhkcbm [INFO] \"Deployed mds.cephfs.compute-0.vhkcbm on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932602Z", 
"memory_usage": 13495173, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-01T13:10:57.817630Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mds.cephfs.compute-0.vhkcbm", "version": "18.2.7"}, {"container_id": "d581f7f0a3e6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "25.95%", "created": "2025-10-01T13:08:02.686455Z", "daemon_id": "compute-0.puxjpb", "daemon_name": "mgr.compute-0.puxjpb", "daemon_type": "mgr", "events": ["2025-10-01T13:09:32.484128Z daemon:mgr.compute-0.puxjpb [INFO] \"Reconfigured mgr.compute-0.puxjpb on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.931834Z", "memory_usage": 549768396, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-01T13:08:02.573938Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mgr.compute-0.puxjpb", "version": "18.2.7"}, {"container_id": "dfadbb96d7d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.11%", "created": "2025-10-01T13:07:57.222279Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-01T13:09:31.752707Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-10-01T13:10:59.931614Z", "memory_request": 2147483648, "memory_usage": 41450209, "ports": [], "service_name": "mon", "started": "2025-10-01T13:08:00.154422Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@mon.compute-0", "version": "18.2.7"}, {"container_id": "ae2fd024bf44", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.44%", "created": "2025-10-01T13:09:54.436549Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-01T13:09:54.480189Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932104Z", "memory_request": 4294967296, "memory_usage": 59087257, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:09:54.329396Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.0", "version": "18.2.7"}, {"container_id": "c7bfaf4b1718", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.71%", "created": "2025-10-01T13:09:59.354689Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-10-01T13:09:59.425766Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932229Z", "memory_request": 4294967296, "memory_usage": 61708697, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:09:59.164391Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.1", "version": "18.2.7"}, {"container_id": "1866f3a29a4e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.78%", "created": "2025-10-01T13:10:04.318830Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-01T13:10:04.393416Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932351Z", "memory_request": 4294967296, "memory_usage": 60083404, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-01T13:10:04.117457Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@osd.2", "version": "18.2.7"}, {"container_id": "aad65a249f3d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "3.13%", "created": "2025-10-01T13:10:55.942344Z", "daemon_id": "rgw.compute-0.rmxmfa", "daemon_name": "rgw.rgw.compute-0.rmxmfa", "daemon_type": "rgw", "events": ["2025-10-01T13:10:56.000789Z daemon:rgw.rgw.compute-0.rmxmfa [INFO] \"Deployed rgw.rgw.compute-0.rmxmfa on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-10-01T13:10:59.932477Z", "memory_usage": 17962106, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-10-01T13:10:55.822894Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f@rgw.rgw.compute-0.rmxmfa", "version": "18.2.7"}]
Oct 01 13:11:00 compute-0 systemd[1]: libpod-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope: Deactivated successfully.
Oct 01 13:11:00 compute-0 podman[101328]: 2025-10-01 13:11:00.40590729 +0000 UTC m=+0.711402657 container died 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-291c74d384e92641d8f36aac3fae5c96f84a7b01981c549d948911512f3a6145-merged.mount: Deactivated successfully.
Oct 01 13:11:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:00 compute-0 podman[101328]: 2025-10-01 13:11:00.447836829 +0000 UTC m=+0.753332206 container remove 6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0 (image=quay.io/ceph/ceph:v18, name=beautiful_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:00 compute-0 systemd[1]: libpod-conmon-6559262c4fe265653d81648ac6f0d4d5570fc00528a00c572f0080f1d3b899e0.scope: Deactivated successfully.
Oct 01 13:11:00 compute-0 sudo[101293]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 01 13:11:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 01 13:11:00 compute-0 rsyslogd[1009]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "0abeef01559d", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 01 13:11:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct 01 13:11:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.722601871 +0000 UTC m=+0.061295299 container create 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:00 compute-0 ceph-mon[74802]: 3.2 scrub starts
Oct 01 13:11:00 compute-0 ceph-mon[74802]: 3.2 scrub ok
Oct 01 13:11:00 compute-0 ceph-mon[74802]: pgmap v99: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: mds.? [v2:192.168.122.100:6814/2185144476,v1:192.168.122.100:6815/2185144476] up:active
Oct 01 13:11:00 compute-0 ceph-mon[74802]: fsmap cephfs:1 {0=cephfs.compute-0.vhkcbm=up:active}
Oct 01 13:11:00 compute-0 ceph-mon[74802]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 01 13:11:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct 01 13:11:00 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 43 pg[10.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 01 13:11:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 01 13:11:00 compute-0 systemd[1]: Started libpod-conmon-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope.
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.694505304 +0000 UTC m=+0.033198782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.830482328 +0000 UTC m=+0.169175766 container init 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.841788953 +0000 UTC m=+0.180482381 container start 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.845556897 +0000 UTC m=+0.184250375 container attach 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:11:00 compute-0 gracious_diffie[101576]: 167 167
Oct 01 13:11:00 compute-0 systemd[1]: libpod-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope: Deactivated successfully.
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.848013683 +0000 UTC m=+0.186707111 container died 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4da63151bf7b0579ea858ff99e4f0cbee7ba30e611b38f0f201f8096b8e1a4d-merged.mount: Deactivated successfully.
Oct 01 13:11:00 compute-0 podman[101560]: 2025-10-01 13:11:00.896773648 +0000 UTC m=+0.235467076 container remove 87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:00 compute-0 systemd[1]: libpod-conmon-87f4685100bacc181719033108d4cab84314a819cb2d0f7bb5c1f7c03cd42013.scope: Deactivated successfully.
Oct 01 13:11:01 compute-0 podman[101598]: 2025-10-01 13:11:01.074597857 +0000 UTC m=+0.043967101 container create 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:11:01 compute-0 systemd[1]: Started libpod-conmon-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope.
Oct 01 13:11:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 podman[101598]: 2025-10-01 13:11:01.056293199 +0000 UTC m=+0.025662483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:01 compute-0 podman[101598]: 2025-10-01 13:11:01.156642086 +0000 UTC m=+0.126011371 container init 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:11:01 compute-0 podman[101598]: 2025-10-01 13:11:01.167126417 +0000 UTC m=+0.136495671 container start 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:01 compute-0 podman[101598]: 2025-10-01 13:11:01.170192189 +0000 UTC m=+0.139561443 container attach 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:01 compute-0 sudo[101642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-couidgwhjkdmvgfqfarjdugfpwsonday ; /usr/bin/python3'
Oct 01 13:11:01 compute-0 sudo[101642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:01 compute-0 python3[101644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:01 compute-0 podman[101645]: 2025-10-01 13:11:01.518234755 +0000 UTC m=+0.053317756 container create 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:01 compute-0 systemd[1]: Started libpod-conmon-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope.
Oct 01 13:11:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:01 compute-0 podman[101645]: 2025-10-01 13:11:01.487157298 +0000 UTC m=+0.022240309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:01 compute-0 podman[101645]: 2025-10-01 13:11:01.602556374 +0000 UTC m=+0.137639405 container init 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:01 compute-0 podman[101645]: 2025-10-01 13:11:01.609512777 +0000 UTC m=+0.144595778 container start 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:01 compute-0 podman[101645]: 2025-10-01 13:11:01.61293908 +0000 UTC m=+0.148022161 container attach 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:11:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct 01 13:11:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v101: 180 pgs: 3 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 01 13:11:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct 01 13:11:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 2.4 scrub starts
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 2.4 scrub ok
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 4.4 scrub starts
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 4.4 scrub ok
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 3.3 scrub starts
Oct 01 13:11:01 compute-0 ceph-mon[74802]: 3.3 scrub ok
Oct 01 13:11:01 compute-0 ceph-mon[74802]: osdmap e43: 3 total, 3 up, 3 in
Oct 01 13:11:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 01 13:11:01 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 44 pg[10.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:11:02 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 01 13:11:02 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 01 13:11:02 compute-0 infallible_ptolemy[101614]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:11:02 compute-0 infallible_ptolemy[101614]: --> relative data size: 1.0
Oct 01 13:11:02 compute-0 infallible_ptolemy[101614]: --> All data devices are unavailable
Oct 01 13:11:02 compute-0 systemd[1]: libpod-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope: Deactivated successfully.
Oct 01 13:11:02 compute-0 podman[101598]: 2025-10-01 13:11:02.148582383 +0000 UTC m=+1.117951627 container died 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct 01 13:11:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003077922' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:11:02 compute-0 exciting_ritchie[101660]: 
Oct 01 13:11:02 compute-0 exciting_ritchie[101660]: {"fsid":"eb4b6ead-01d1-53b3-a52a-47dcc600555f","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":181,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1759324211,"num_in_osds":3,"osd_in_since":1759324184,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":177},{"state_name":"unknown","count":3}],"num_pgs":180,"num_pools":10,"num_objects":2,"data_bytes":459280,"bytes_used":84111360,"bytes_avail":64327815168,"bytes_total":64411926528,"unknown_pgs_ratio":0.01666666753590107},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.vhkcbm","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-01T13:09:49.717098+0000","services":{}},"progress_events":{"117aa2cc-6633-48b9-9615-b57d148f5b2d":{"message":"Global Recovery Event (5s)\n      [===========================.] ","progress":0.99438202381134033,"add_to_ceph_s":true}}}
Oct 01 13:11:02 compute-0 systemd[1]: libpod-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope: Deactivated successfully.
Oct 01 13:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ba11c38fec38ea81eb25184507f14e35ecae937f061c8932de0e5936e24a21-merged.mount: Deactivated successfully.
Oct 01 13:11:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 01 13:11:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 01 13:11:02 compute-0 podman[101645]: 2025-10-01 13:11:02.530216472 +0000 UTC m=+1.065299513 container died 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:11:02 compute-0 podman[101598]: 2025-10-01 13:11:02.529009894 +0000 UTC m=+1.498379178 container remove 2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:02 compute-0 sudo[101482]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:02 compute-0 sudo[101747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:02 compute-0 sudo[101747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:02 compute-0 sudo[101747]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:02 compute-0 sudo[101773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:02 compute-0 sudo[101773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:02 compute-0 sudo[101773]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d9e0d2e5d55ab934ed8f3f08d1fef841e90c9fc43ab471334fa7e887d00e396-merged.mount: Deactivated successfully.
Oct 01 13:11:02 compute-0 sudo[101798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:02 compute-0 sudo[101798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:02 compute-0 sudo[101798]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct 01 13:11:02 compute-0 podman[101645]: 2025-10-01 13:11:02.789847063 +0000 UTC m=+1.324930064 container remove 6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb (image=quay.io/ceph/ceph:v18, name=exciting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct 01 13:11:02 compute-0 sudo[101823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:11:02 compute-0 sudo[101823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:02 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 11 completed events
Oct 01 13:11:02 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct 01 13:11:02 compute-0 sudo[101642]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 01 13:11:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 01 13:11:02 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 45 pg[11.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:02 compute-0 ceph-mon[74802]: pgmap v101: 180 pgs: 3 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3652552028' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 01 13:11:02 compute-0 ceph-mon[74802]: osdmap e44: 3 total, 3 up, 3 in
Oct 01 13:11:02 compute-0 ceph-mon[74802]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 01 13:11:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1003077922' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 01 13:11:02 compute-0 systemd[1]: libpod-conmon-2ea5c02b4462f0a8071cecbbe558bf3fb2fac8dae73a293d1ef9bccce9a19bb6.scope: Deactivated successfully.
Oct 01 13:11:02 compute-0 systemd[1]: libpod-conmon-6d18321cf9409c285c4f7f009b14099fa3a13e3367257839ac697a8081ad95cb.scope: Deactivated successfully.
Oct 01 13:11:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.153626988 +0000 UTC m=+0.049182250 container create 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:11:03 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Oct 01 13:11:03 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Oct 01 13:11:03 compute-0 systemd[1]: Started libpod-conmon-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope.
Oct 01 13:11:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.132855464 +0000 UTC m=+0.028410756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.233875702 +0000 UTC m=+0.129430964 container init 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.241949218 +0000 UTC m=+0.137504510 container start 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.245538868 +0000 UTC m=+0.141094160 container attach 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:11:03 compute-0 recursing_ishizaka[101905]: 167 167
Oct 01 13:11:03 compute-0 systemd[1]: libpod-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope: Deactivated successfully.
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.248372914 +0000 UTC m=+0.143928176 container died 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cafe0e75f85f03ce1ac0202569a0b14dce26ffd9a8a030472023fd6a226ca3df-merged.mount: Deactivated successfully.
Oct 01 13:11:03 compute-0 podman[101889]: 2025-10-01 13:11:03.300549935 +0000 UTC m=+0.196105227 container remove 1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:03 compute-0 systemd[1]: libpod-conmon-1191cc06c3e0cb41d83cb728b9095fc4fb072df506668e1428ec75b9f2418e1d.scope: Deactivated successfully.
Oct 01 13:11:03 compute-0 podman[101929]: 2025-10-01 13:11:03.455818316 +0000 UTC m=+0.050279623 container create 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:11:03 compute-0 systemd[1]: Started libpod-conmon-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope.
Oct 01 13:11:03 compute-0 podman[101929]: 2025-10-01 13:11:03.435104914 +0000 UTC m=+0.029566251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:03 compute-0 podman[101929]: 2025-10-01 13:11:03.565149977 +0000 UTC m=+0.159611284 container init 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:11:03 compute-0 podman[101929]: 2025-10-01 13:11:03.572531172 +0000 UTC m=+0.166992469 container start 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:03 compute-0 podman[101929]: 2025-10-01 13:11:03.575929126 +0000 UTC m=+0.170390423 container attach 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:03 compute-0 sudo[101973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqmpntyhlvlyilzakuzeehdmadqxnrwd ; /usr/bin/python3'
Oct 01 13:11:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v104: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1015 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 13:11:03 compute-0 sudo[101973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct 01 13:11:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 01 13:11:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct 01 13:11:03 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct 01 13:11:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 01 13:11:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 01 13:11:03 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:03 compute-0 ceph-mon[74802]: 2.5 scrub starts
Oct 01 13:11:03 compute-0 ceph-mon[74802]: 2.5 scrub ok
Oct 01 13:11:03 compute-0 ceph-mon[74802]: 3.4 scrub starts
Oct 01 13:11:03 compute-0 ceph-mon[74802]: 3.4 scrub ok
Oct 01 13:11:03 compute-0 ceph-mon[74802]: osdmap e45: 3 total, 3 up, 3 in
Oct 01 13:11:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 01 13:11:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 01 13:11:03 compute-0 ceph-mon[74802]: osdmap e46: 3 total, 3 up, 3 in
Oct 01 13:11:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 01 13:11:03 compute-0 python3[101975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:03 compute-0 podman[101976]: 2025-10-01 13:11:03.976049928 +0000 UTC m=+0.063711582 container create f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:04 compute-0 systemd[1]: Started libpod-conmon-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope.
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:03.950104668 +0000 UTC m=+0.037766392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:04.079010075 +0000 UTC m=+0.166671729 container init f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:04.090060002 +0000 UTC m=+0.177721676 container start f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:04.094992763 +0000 UTC m=+0.182654447 container attach f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:11:04 compute-0 objective_napier[101945]: {
Oct 01 13:11:04 compute-0 objective_napier[101945]:     "0": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:         {
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "devices": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "/dev/loop3"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             ],
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_name": "ceph_lv0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_size": "21470642176",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "name": "ceph_lv0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "tags": {
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.crush_device_class": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.encrypted": "0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_id": "0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.vdo": "0"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             },
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "vg_name": "ceph_vg0"
Oct 01 13:11:04 compute-0 objective_napier[101945]:         }
Oct 01 13:11:04 compute-0 objective_napier[101945]:     ],
Oct 01 13:11:04 compute-0 objective_napier[101945]:     "1": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:         {
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "devices": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "/dev/loop4"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             ],
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_name": "ceph_lv1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_size": "21470642176",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "name": "ceph_lv1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "tags": {
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.crush_device_class": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.encrypted": "0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_id": "1",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.vdo": "0"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             },
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "vg_name": "ceph_vg1"
Oct 01 13:11:04 compute-0 objective_napier[101945]:         }
Oct 01 13:11:04 compute-0 objective_napier[101945]:     ],
Oct 01 13:11:04 compute-0 objective_napier[101945]:     "2": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:         {
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "devices": [
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "/dev/loop5"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             ],
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_name": "ceph_lv2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_size": "21470642176",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "name": "ceph_lv2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "tags": {
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.crush_device_class": "",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.encrypted": "0",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osd_id": "2",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:                 "ceph.vdo": "0"
Oct 01 13:11:04 compute-0 objective_napier[101945]:             },
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "type": "block",
Oct 01 13:11:04 compute-0 objective_napier[101945]:             "vg_name": "ceph_vg2"
Oct 01 13:11:04 compute-0 objective_napier[101945]:         }
Oct 01 13:11:04 compute-0 objective_napier[101945]:     ]
Oct 01 13:11:04 compute-0 objective_napier[101945]: }
Oct 01 13:11:04 compute-0 systemd[1]: libpod-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope: Deactivated successfully.
Oct 01 13:11:04 compute-0 podman[101929]: 2025-10-01 13:11:04.413947731 +0000 UTC m=+1.008409048 container died 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b52093694e9de373a3fb86f678ef90cc0b04d52689e94029132fb8be4d5891-merged.mount: Deactivated successfully.
Oct 01 13:11:04 compute-0 podman[101929]: 2025-10-01 13:11:04.466670308 +0000 UTC m=+1.061131605 container remove 3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:04 compute-0 systemd[1]: libpod-conmon-3ae37ec80a7845be6e87d48ff228e5d5ef85b1973e44430e0ee58646e751728d.scope: Deactivated successfully.
Oct 01 13:11:04 compute-0 sudo[101823]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:04 compute-0 sudo[102029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:04 compute-0 sudo[102029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:04 compute-0 sudo[102029]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:04 compute-0 sudo[102054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:04 compute-0 sudo[102054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:04 compute-0 sudo[102054]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:04 compute-0 sudo[102079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:04 compute-0 sudo[102079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:04 compute-0 sudo[102079]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct 01 13:11:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/762836433' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:11:04 compute-0 happy_mahavira[101991]: 
Oct 01 13:11:04 compute-0 systemd[1]: libpod-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope: Deactivated successfully.
Oct 01 13:11:04 compute-0 happy_mahavira[101991]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.rmxmfa","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:04.722227385 +0000 UTC m=+0.809889009 container died f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4257014eac995d7b0182c85518d66914a3ee31e428bf9a45902ad6c81942cca4-merged.mount: Deactivated successfully.
Oct 01 13:11:04 compute-0 sudo[102106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:11:04 compute-0 sudo[102106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:04 compute-0 podman[101976]: 2025-10-01 13:11:04.763208924 +0000 UTC m=+0.850870558 container remove f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb (image=quay.io/ceph/ceph:v18, name=happy_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:11:04 compute-0 systemd[1]: libpod-conmon-f76cb0a44985d1b20b065665427dd79a3224da2db8c93445f92540a6202579bb.scope: Deactivated successfully.
Oct 01 13:11:04 compute-0 sudo[101973]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct 01 13:11:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 01 13:11:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct 01 13:11:04 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct 01 13:11:04 compute-0 ceph-mon[74802]: 4.5 deep-scrub starts
Oct 01 13:11:04 compute-0 ceph-mon[74802]: 4.5 deep-scrub ok
Oct 01 13:11:04 compute-0 ceph-mon[74802]: pgmap v104: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1015 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 13:11:04 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/762836433' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 01 13:11:04 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/954311754' entity='client.rgw.rgw.compute-0.rmxmfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 01 13:11:04 compute-0 ceph-mon[74802]: osdmap e47: 3 total, 3 up, 3 in
Oct 01 13:11:04 compute-0 radosgw[100440]: LDAP not started since no server URIs were provided in the configuration.
Oct 01 13:11:04 compute-0 radosgw[100440]: framework: beast
Oct 01 13:11:04 compute-0 radosgw[100440]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 01 13:11:04 compute-0 radosgw[100440]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 01 13:11:04 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-rgw-rgw-compute-0-rmxmfa[100435]: 2025-10-01T13:11:04.943+0000 7f62d0374940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 01 13:11:04 compute-0 radosgw[100440]: starting handler: beast
Oct 01 13:11:04 compute-0 radosgw[100440]: set uid:gid to 167:167 (ceph:ceph)
Oct 01 13:11:05 compute-0 radosgw[100440]: mgrc service_daemon_register rgw.14271 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.rmxmfa,kernel_description=#1 SMP PREEMPT_DYNAMIC Mon Sep 15 21:46:13 UTC 2025,kernel_version=5.14.0-617.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f267d240-c1e7-4ec2-8e2a-64c5ae3c7ead,zone_name=default,zonegroup_id=852c69ab-29aa-4b27-9f2a-563f30a89237,zonegroup_name=default}
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.075952543 +0000 UTC m=+0.037910686 container create 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:11:05 compute-0 systemd[1]: Started libpod-conmon-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope.
Oct 01 13:11:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.060583045 +0000 UTC m=+0.022541218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.159034975 +0000 UTC m=+0.120993138 container init 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.166295626 +0000 UTC m=+0.128253769 container start 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:11:05 compute-0 clever_jones[102743]: 167 167
Oct 01 13:11:05 compute-0 systemd[1]: libpod-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope: Deactivated successfully.
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.173407334 +0000 UTC m=+0.135365497 container attach 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.17462945 +0000 UTC m=+0.136587603 container died 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 13:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ec14fdfa957104fe55e05cfdc5c083bb67db4f014972e12c89c6a60c2f1a6d5-merged.mount: Deactivated successfully.
Oct 01 13:11:05 compute-0 podman[102727]: 2025-10-01 13:11:05.225094648 +0000 UTC m=+0.187052791 container remove 8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:05 compute-0 systemd[1]: libpod-conmon-8484dd4b73d964d6963240e2f8969fa16e83ccee1cb1ad36b31d2cc0a9ad4cf3.scope: Deactivated successfully.
Oct 01 13:11:05 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 01 13:11:05 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 01 13:11:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:05 compute-0 podman[102768]: 2025-10-01 13:11:05.443040979 +0000 UTC m=+0.061186785 container create 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:11:05 compute-0 systemd[1]: Started libpod-conmon-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope.
Oct 01 13:11:05 compute-0 podman[102768]: 2025-10-01 13:11:05.41288236 +0000 UTC m=+0.031028176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:05 compute-0 podman[102768]: 2025-10-01 13:11:05.565155861 +0000 UTC m=+0.183301667 container init 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:05 compute-0 podman[102768]: 2025-10-01 13:11:05.579391304 +0000 UTC m=+0.197537070 container start 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:05 compute-0 podman[102768]: 2025-10-01 13:11:05.582805728 +0000 UTC m=+0.200951534 container attach 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:11:05 compute-0 sudo[102813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifsrlepbovyuahksbzvkvezgfeeyuvs ; /usr/bin/python3'
Oct 01 13:11:05 compute-0 sudo[102813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v107: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 13:11:05 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 13:11:05 compute-0 ceph-mon[74802]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct 01 13:11:05 compute-0 ceph-mon[74802]: 3.5 scrub starts
Oct 01 13:11:05 compute-0 ceph-mon[74802]: 3.5 scrub ok
Oct 01 13:11:05 compute-0 ceph-mon[74802]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 01 13:11:05 compute-0 ceph-mon[74802]: Cluster is now healthy
Oct 01 13:11:05 compute-0 python3[102815]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:05 compute-0 podman[102816]: 2025-10-01 13:11:05.974825423 +0000 UTC m=+0.040760542 container create dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:11:06 compute-0 systemd[1]: Started libpod-conmon-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope.
Oct 01 13:11:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:05.959978991 +0000 UTC m=+0.025914140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:06.059185374 +0000 UTC m=+0.125120523 container init dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:06.065817546 +0000 UTC m=+0.131752665 container start dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:06.069088666 +0000 UTC m=+0.135023785 container attach dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:11:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct 01 13:11:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct 01 13:11:06 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Oct 01 13:11:06 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Oct 01 13:11:06 compute-0 gallant_fermi[102785]: {
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_id": 0,
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "type": "bluestore"
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     },
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_id": 2,
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "type": "bluestore"
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     },
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_id": 1,
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:         "type": "bluestore"
Oct 01 13:11:06 compute-0 gallant_fermi[102785]:     }
Oct 01 13:11:06 compute-0 gallant_fermi[102785]: }
Oct 01 13:11:06 compute-0 systemd[1]: libpod-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope: Deactivated successfully.
Oct 01 13:11:06 compute-0 podman[102768]: 2025-10-01 13:11:06.567918535 +0000 UTC m=+1.186064301 container died 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct 01 13:11:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128680161' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 01 13:11:06 compute-0 youthful_albattani[102832]: mimic
Oct 01 13:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf258735530560ec2975a9f02b53e3a0c1e0a526e2e8ca3ff5762424bee3def-merged.mount: Deactivated successfully.
Oct 01 13:11:06 compute-0 systemd[1]: libpod-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope: Deactivated successfully.
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:06.626010966 +0000 UTC m=+0.691946105 container died dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c390a144cc64b45983da30673f817bca0fba55a1058334eacff12ed91c839e2-merged.mount: Deactivated successfully.
Oct 01 13:11:06 compute-0 podman[102816]: 2025-10-01 13:11:06.807687402 +0000 UTC m=+0.873622531 container remove dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317 (image=quay.io/ceph/ceph:v18, name=youthful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:11:06 compute-0 systemd[1]: libpod-conmon-dd3a838f62e2ca5792dda9d08a6869f9b943ed00a6cdd023ce6e5e7fd759f317.scope: Deactivated successfully.
Oct 01 13:11:06 compute-0 sudo[102813]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:06 compute-0 podman[102768]: 2025-10-01 13:11:06.834084956 +0000 UTC m=+1.452230762 container remove 8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:11:06 compute-0 systemd[1]: libpod-conmon-8ee7e5931d5205c6ee9b7b9f5accf7f61035f61df4fa5484a4d080e2779ea33f.scope: Deactivated successfully.
Oct 01 13:11:06 compute-0 sudo[102106]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:11:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:11:06 compute-0 ceph-mon[74802]: pgmap v107: 181 pgs: 1 unknown, 180 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct 01 13:11:06 compute-0 ceph-mon[74802]: 2.6 scrub starts
Oct 01 13:11:06 compute-0 ceph-mon[74802]: 2.6 scrub ok
Oct 01 13:11:06 compute-0 ceph-mon[74802]: 3.6 deep-scrub starts
Oct 01 13:11:06 compute-0 ceph-mon[74802]: 3.6 deep-scrub ok
Oct 01 13:11:06 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/128680161' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 01 13:11:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c343aa81-eb55-47fd-bdde-694cc5d55585 does not exist
Oct 01 13:11:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7ff1afc3-160f-4931-a1a3-30de39992e3a does not exist
Oct 01 13:11:07 compute-0 sudo[102911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:07 compute-0 sudo[102911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[102911]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:07 compute-0 sudo[102936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:11:07 compute-0 sudo[102936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[102936]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:07 compute-0 sudo[102961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:07 compute-0 sudo[102961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[102961]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:07 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 01 13:11:07 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 01 13:11:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct 01 13:11:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct 01 13:11:07 compute-0 sudo[102986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:07 compute-0 sudo[102986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[102986]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:07 compute-0 sudo[103011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:07 compute-0 sudo[103011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[103011]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:07 compute-0 sudo[103036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:11:07 compute-0 sudo[103036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:07 compute-0 sudo[103125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwlqlbpbljhfldnluatinxvhknoiced ; /usr/bin/python3'
Oct 01 13:11:07 compute-0 sudo[103125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v108: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 193 op/s
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 117aa2cc-6633-48b9-9615-b57d148f5b2d (Global Recovery Event) in 15 seconds
Oct 01 13:11:07 compute-0 python3[103130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:07 compute-0 ceph-mon[74802]: 2.7 scrub starts
Oct 01 13:11:07 compute-0 ceph-mon[74802]: 2.7 scrub ok
Oct 01 13:11:07 compute-0 ceph-mon[74802]: 4.6 deep-scrub starts
Oct 01 13:11:07 compute-0 ceph-mon[74802]: 4.6 deep-scrub ok
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct 01 13:11:07 compute-0 podman[103158]: 2025-10-01 13:11:07.981791638 +0000 UTC m=+0.074891323 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:11:07 compute-0 podman[103165]: 2025-10-01 13:11:07.982046906 +0000 UTC m=+0.054163361 container create 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:11:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611283302s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762802124s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611252785s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762802124s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611231804s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762802124s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693221092s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.844810486s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693167686s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.844810486s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.611141205s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762802124s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610872269s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762786865s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610814095s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762786865s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693246841s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845291138s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610735893s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762786865s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.610707283s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762786865s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.693194389s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845291138s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.691101074s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.844795227s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.691054344s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.844795227s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607466698s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762596130s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607438087s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762596130s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689908981s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845138550s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689885139s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845138550s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606488228s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761795044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606466293s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761779785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689832687s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845146179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606427193s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761779785s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689791679s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845146179s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689792633s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845191956s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689774513s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845191956s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607088089s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.762573242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689687729s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845214844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606253624s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761779785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689668655s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845214844s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606224060s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761779785s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.607014656s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.762573242s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689692497s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845367432s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689534187s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845207214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689671516s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845367432s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689507484s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845207214s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.606098175s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761795044s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689553261s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845306396s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689526558s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845306396s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605860710s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761695862s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605841637s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761695862s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689403534s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845283508s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689414024s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845375061s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689373016s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845283508s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605442047s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761413574s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605194092s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761169434s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605110168s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761123657s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605164528s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761169434s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605414391s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761413574s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.605075836s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761123657s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689383507s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845375061s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604965210s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761116028s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689256668s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845428467s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689154625s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845352173s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604942322s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761116028s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604895592s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761108398s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689238548s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845428467s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689103127s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845352173s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.689115524s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845420837s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604805946s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761108398s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604825020s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.761154175s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.604804993s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.761154175s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688936234s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845443726s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688903809s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845436096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603878021s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760383606s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688879013s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845443726s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603754044s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760383606s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688836098s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845436096s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688723564s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845504761s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688652992s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.845504761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688632011s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845504761s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603507042s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760368347s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688696861s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845504761s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603458405s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760368347s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688436508s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.845420837s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.596887589s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.754013062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603412628s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760566711s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.596863747s) [2] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.754013062s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688902855s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 78.846031189s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.603388786s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760566711s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=11.688844681s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.846031189s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.602959633s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 75.760559082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48 pruub=8.602933884s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.760559082s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.589295387s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.800079346s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.589271545s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.800079346s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590354919s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801208496s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590324402s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801208496s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590507507s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801475525s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.590487480s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801475525s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.584633827s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795669556s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.584611893s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795669556s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583739281s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795593262s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583705902s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795593262s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583567619s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795600891s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583550453s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795600891s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589060783s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801193237s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589043617s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801193237s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583275795s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795547485s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.583257675s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795547485s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588729858s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801284790s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588774681s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801338196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588702202s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801284790s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588756561s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801338196s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589044571s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801727295s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582899094s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795539856s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.589032173s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801727295s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582836151s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795539856s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588561058s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801330566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588545799s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801330566s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588485718s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801338196s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588474274s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801338196s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582536697s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795402527s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582480431s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795402527s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582041740s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795143127s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588356018s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801506042s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582018852s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795143127s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588335037s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801506042s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582125664s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795425415s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582107544s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795425415s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588095665s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801467896s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581447601s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794876099s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581975937s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795379639s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.582921982s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795661926s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581433296s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794876099s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581923485s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795379639s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587980270s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801498413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587966919s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801498413s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581234932s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794807434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581216812s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794807434s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588148117s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801757812s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588127136s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801757812s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581123352s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794807434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581105232s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794807434s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587775230s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801521301s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580997467s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794784546s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587757111s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801521301s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580979347s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794784546s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587673187s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801551819s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580879211s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794776917s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587644577s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801551819s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580860138s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794776917s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587603569s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801628113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.588075638s) [0] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801467896s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587579727s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801628113s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580642700s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794715881s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580621719s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794715881s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587475777s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801628113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587457657s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801628113s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580371857s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794563293s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580352783s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794563293s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580206871s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794502258s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580180168s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794502258s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587282181s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801635742s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580199242s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794555664s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587265968s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801635742s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580146790s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794555664s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579926491s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794464111s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581134796s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.795700073s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579904556s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794464111s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.581110954s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795700073s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587009430s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801696777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586990356s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801696777s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586926460s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801666260s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586906433s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801666260s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579672813s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active pruub 77.794448853s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586896896s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 71.801696777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.579648972s) [0] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.794448853s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586878777s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.801696777s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587798119s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753082275s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587710381s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753082275s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587624550s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753120422s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587596893s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753120422s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587339401s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753013611s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587310791s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753013611s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.587023735s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752922058s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586956978s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752922058s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586861610s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752922058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586841583s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752967834s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586811066s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752967834s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586739540s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752922058s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693970680s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 41'2 active pruub 84.860397339s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586507797s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752914429s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693925858s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.860397339s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48 pruub=15.580777168s) [1] r=-1 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.795661926s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=37/39 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693649292s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'4 lcod 41'4 mlcod 41'4 active pruub 84.860374451s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586191177s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752914429s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=37/39 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.693579674s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'4 lcod 41'4 mlcod 0'0 unknown NOTIFY pruub 84.860374451s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.586101532s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752929688s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585674286s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752548218s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585650444s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752548218s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585943222s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752929688s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585941315s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753005981s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688312531s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 84.855400085s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688279152s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 84.855400085s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688035965s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 84.855316162s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.688012123s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.855316162s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585420609s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752792358s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585400581s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752792358s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687851906s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'1 lcod 41'2 mlcod 41'2 active pruub 84.855308533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585071564s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752540588s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687819481s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'1 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.855308533s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585039139s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752540588s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584961891s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752487183s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584946632s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752487183s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584838867s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752479553s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584595680s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752265930s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584817886s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752479553s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584566116s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752265930s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584703445s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752494812s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584680557s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752494812s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687451363s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 84.855308533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.585902214s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753005981s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584606171s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752555847s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584563255s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752555847s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686568260s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 41'2 active pruub 84.854682922s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=37/39 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686532974s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'3 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.854682922s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686651230s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active pruub 84.854827881s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.687254906s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 84.855308533s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48 pruub=12.686617851s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.854827881s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.583729744s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752014160s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.583708763s) [2] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752014160s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584540367s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.753028870s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.584517479s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.753028870s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 48 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.581744194s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active pruub 81.752479553s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48 pruub=9.581666946s) [1] r=-1 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.752479553s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 48 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 systemd[1]: Started libpod-conmon-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope.
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:07.958167949 +0000 UTC m=+0.030284454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:08 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:08.103870429 +0000 UTC m=+0.175986944 container init 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:11:08 compute-0 podman[103158]: 2025-10-01 13:11:08.109001915 +0000 UTC m=+0.202101580 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:08.115411291 +0000 UTC m=+0.187527756 container start 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:08.15410934 +0000 UTC m=+0.226225815 container attach 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:11:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct 01 13:11:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct 01 13:11:08 compute-0 sshd-session[103166]: Invalid user user1 from 200.7.101.139 port 41620
Oct 01 13:11:08 compute-0 sshd-session[103166]: Received disconnect from 200.7.101.139 port 41620:11: Bye Bye [preauth]
Oct 01 13:11:08 compute-0 sshd-session[103166]: Disconnected from invalid user user1 200.7.101.139 port 41620 [preauth]
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790165366' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 01 13:11:08 compute-0 confident_pascal[103197]: 
Oct 01 13:11:08 compute-0 confident_pascal[103197]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct 01 13:11:08 compute-0 systemd[1]: libpod-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope: Deactivated successfully.
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:08.747402858 +0000 UTC m=+0.819519343 container died 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:11:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6e42b0cb15986298fc2efdccc361ad181d9aa2104ce99e545cd050011c1474-merged.mount: Deactivated successfully.
Oct 01 13:11:08 compute-0 podman[103165]: 2025-10-01 13:11:08.799836986 +0000 UTC m=+0.871953441 container remove 2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6 (image=quay.io/ceph/ceph:v18, name=confident_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:11:08 compute-0 systemd[1]: libpod-conmon-2de2090294579f5d172b154ccea1a5793c00c481be7e1825f4dc77479ba6d9b6.scope: Deactivated successfully.
Oct 01 13:11:08 compute-0 sudo[103125]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:08 compute-0 sudo[103036]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6ac94d59-47fe-471d-a999-71b2be2e2628 does not exist
Oct 01 13:11:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0c3ac7b1-2121-41af-a09e-c26566b81f1e does not exist
Oct 01 13:11:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fd15f104-cc64-4c7a-a0d8-e6b4e19c6203 does not exist
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:11:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: pgmap v108: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 8.7 KiB/s wr, 193 op/s
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:11:08 compute-0 ceph-mon[74802]: osdmap e48: 3 total, 3 up, 3 in
Oct 01 13:11:08 compute-0 ceph-mon[74802]: 4.b scrub starts
Oct 01 13:11:08 compute-0 ceph-mon[74802]: 4.b scrub ok
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1790165366' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct 01 13:11:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct 01 13:11:09 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.11( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1f( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.13( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1c( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.14( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.16( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.15( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.12( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.17( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.15( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.8( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.9( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.b( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.3( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.a( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.6( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1f( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.5( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.2( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.3( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.f( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.4( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.2( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1d( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.7( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.f( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1b( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[5.1e( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.18( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.19( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.c( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[2.1c( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.18( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.11( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.16( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.13( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.e( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.18( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.11( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 49 pg[3.1( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [0] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.e( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.a( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.5( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.7( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.1d( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1a( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[4.1b( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [2] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.1e( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 49 pg[3.8( empty local-lis/les=48/49 n=0 ec=33/15 lis/c=33/33 les/c/f=35/35/0 sis=48) [2] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.d( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.f( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.1( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.12( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.14( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.1b( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.2( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.10( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.11( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.17( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.13( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.12( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.15( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.16( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.9( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1d( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.8( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.d( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.9( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.a( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.5( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.5( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.7( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.5( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.3( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[4.4( empty local-lis/les=48/49 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.4( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.7( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.f( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.c( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.1a( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.18( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[5.19( empty local-lis/les=48/49 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[35,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.3( v 41'2 lc 0'0 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.6( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[2.9( empty local-lis/les=48/49 n=0 ec=33/13 lis/c=33/33 les/c/f=34/34/0 sis=48) [1] r=0 lpr=48 pi=[33,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 49 pg[6.7( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:09 compute-0 sudo[103364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:09 compute-0 sudo[103364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:09 compute-0 sudo[103364]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:09 compute-0 sudo[103389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:09 compute-0 sudo[103389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:09 compute-0 sudo[103389]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:09 compute-0 sudo[103414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:09 compute-0 sudo[103414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:09 compute-0 sudo[103414]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:09 compute-0 sudo[103439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:11:09 compute-0 sudo[103439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.652782996 +0000 UTC m=+0.038612508 container create 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:09 compute-0 systemd[1]: Started libpod-conmon-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope.
Oct 01 13:11:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.635253852 +0000 UTC m=+0.021083384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v111: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.7 KiB/s wr, 186 op/s
Oct 01 13:11:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 01 13:11:09 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.743452598 +0000 UTC m=+0.129282150 container init 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.755735113 +0000 UTC m=+0.141564625 container start 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.758704913 +0000 UTC m=+0.144534465 container attach 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:11:09 compute-0 thirsty_chebyshev[103521]: 167 167
Oct 01 13:11:09 compute-0 systemd[1]: libpod-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope: Deactivated successfully.
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.764526271 +0000 UTC m=+0.150355823 container died 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe1fde2163a541d2752d350ea9e3afe895cb0b157a65f7c8881cf9c929ac5b42-merged.mount: Deactivated successfully.
Oct 01 13:11:09 compute-0 podman[103505]: 2025-10-01 13:11:09.805846559 +0000 UTC m=+0.191676071 container remove 208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:09 compute-0 systemd[1]: libpod-conmon-208cec87962ca6d664d52af156edd1cfbf04be09aa54b298e85c6dee3d88e604.scope: Deactivated successfully.
Oct 01 13:11:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:11:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:11:09 compute-0 ceph-mon[74802]: osdmap e49: 3 total, 3 up, 3 in
Oct 01 13:11:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 13:11:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct 01 13:11:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 13:11:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct 01 13:11:10 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.e( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700785637s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=41'2 lcod 41'2 mlcod 41'2 active pruub 84.860054016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700929642s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active pruub 84.860275269s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.e( v 41'3 (0'0,41'3] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700670242s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=41'2 lcod 41'2 mlcod 0'0 unknown NOTIFY pruub 84.860054016s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700844765s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.860275269s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700228691s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 lcod 0'0 mlcod 0'0 active pruub 84.860404968s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.6( v 45'1 (0'0,45'1] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.694858551s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=45'1 lcod 0'0 mlcod 0'0 active pruub 84.855392456s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.6( v 45'1 (0'0,45'1] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.694758415s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=45'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.855392456s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:10 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 50 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50 pruub=10.700108528s) [1] r=-1 lpr=50 pi=[37,50)/1 crt=0'0 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.860404968s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 50 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:10 compute-0 podman[103544]: 2025-10-01 13:11:10.030883346 +0000 UTC m=+0.069202940 container create 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:11:10 compute-0 systemd[1]: Started libpod-conmon-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope.
Oct 01 13:11:10 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 01 13:11:10 compute-0 podman[103544]: 2025-10-01 13:11:10.005680738 +0000 UTC m=+0.044000422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:10 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 01 13:11:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:10 compute-0 podman[103544]: 2025-10-01 13:11:10.146637553 +0000 UTC m=+0.184957167 container init 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:11:10 compute-0 podman[103544]: 2025-10-01 13:11:10.164633372 +0000 UTC m=+0.202952966 container start 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:10 compute-0 podman[103544]: 2025-10-01 13:11:10.168525 +0000 UTC m=+0.206844624 container attach 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:11 compute-0 ceph-mon[74802]: pgmap v111: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.7 KiB/s wr, 186 op/s
Oct 01 13:11:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 13:11:11 compute-0 ceph-mon[74802]: osdmap e50: 3 total, 3 up, 3 in
Oct 01 13:11:11 compute-0 ceph-mon[74802]: 2.c scrub starts
Oct 01 13:11:11 compute-0 ceph-mon[74802]: 2.c scrub ok
Oct 01 13:11:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct 01 13:11:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct 01 13:11:11 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct 01 13:11:11 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.2( empty local-lis/les=50/51 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:11 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=41'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:11 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.6( v 45'1 lc 0'0 (0'0,45'1] local-lis/les=50/51 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=45'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:11 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 51 pg[6.e( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=50/51 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=50) [1] r=0 lpr=50 pi=[37,50)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:11 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Oct 01 13:11:11 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Oct 01 13:11:11 compute-0 pensive_noether[103562]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:11:11 compute-0 pensive_noether[103562]: --> relative data size: 1.0
Oct 01 13:11:11 compute-0 pensive_noether[103562]: --> All data devices are unavailable
Oct 01 13:11:11 compute-0 systemd[1]: libpod-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Deactivated successfully.
Oct 01 13:11:11 compute-0 systemd[1]: libpod-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Consumed 1.064s CPU time.
Oct 01 13:11:11 compute-0 podman[103544]: 2025-10-01 13:11:11.294990764 +0000 UTC m=+1.333310398 container died 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc379d766997e9264fdc3cbe14f0e8c9fcfb314b128c867d4c51df28295b0251-merged.mount: Deactivated successfully.
Oct 01 13:11:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct 01 13:11:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct 01 13:11:11 compute-0 podman[103544]: 2025-10-01 13:11:11.371885757 +0000 UTC m=+1.410205371 container remove 818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:11 compute-0 systemd[1]: libpod-conmon-818106debff665c6eb223c0ad5b07edc6c34e73ad8e517b3a2e557795413f164.scope: Deactivated successfully.
Oct 01 13:11:11 compute-0 sudo[103439]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:11 compute-0 sudo[103605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:11 compute-0 sudo[103605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:11 compute-0 sudo[103605]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:11 compute-0 sudo[103630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:11 compute-0 sudo[103630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:11 compute-0 sudo[103630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:11 compute-0 sudo[103655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:11 compute-0 sudo[103655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:11 compute-0 sudo[103655]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v114: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 01 13:11:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 13:11:11 compute-0 sudo[103680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:11:11 compute-0 sudo[103680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct 01 13:11:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 13:11:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct 01 13:11:12 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct 01 13:11:12 compute-0 ceph-mon[74802]: osdmap e51: 3 total, 3 up, 3 in
Oct 01 13:11:12 compute-0 ceph-mon[74802]: 4.c deep-scrub starts
Oct 01 13:11:12 compute-0 ceph-mon[74802]: 4.c deep-scrub ok
Oct 01 13:11:12 compute-0 ceph-mon[74802]: 3.b scrub starts
Oct 01 13:11:12 compute-0 ceph-mon[74802]: 3.b scrub ok
Oct 01 13:11:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.915916443s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 41'2 active pruub 84.197708130s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.906821251s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 41'3 active pruub 84.188560486s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.7( v 41'2 (0'0,41'2] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.915840149s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 unknown NOTIFY pruub 84.197708130s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=48/49 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.906641006s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 84.188560486s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.908492088s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 41'2 active pruub 84.190483093s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.3( v 41'2 (0'0,41'2] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.908445358s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 unknown NOTIFY pruub 84.190483093s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.904949188s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'5 mlcod 41'5 active pruub 84.187118530s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:12 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 52 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=48/49 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.904909134s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=41'5 mlcod 0'0 unknown NOTIFY pruub 84.187118530s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:12 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:12 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:12 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:12 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.17305461 +0000 UTC m=+0.040741652 container create a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:11:12 compute-0 systemd[1]: Started libpod-conmon-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope.
Oct 01 13:11:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.155199586 +0000 UTC m=+0.022886628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.261844596 +0000 UTC m=+0.129531648 container init a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.274259444 +0000 UTC m=+0.141946466 container start a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.278004048 +0000 UTC m=+0.145691090 container attach a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:11:12 compute-0 admiring_hodgkin[103759]: 167 167
Oct 01 13:11:12 compute-0 systemd[1]: libpod-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope: Deactivated successfully.
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.281965618 +0000 UTC m=+0.149652660 container died a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a90215f681a310f88ea1a9ea1cebb5f60ad0f4e9324217cbedc670cebaa20ae1-merged.mount: Deactivated successfully.
Oct 01 13:11:12 compute-0 podman[103743]: 2025-10-01 13:11:12.325135354 +0000 UTC m=+0.192822386 container remove a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:12 compute-0 systemd[1]: libpod-conmon-a3a83a2c169b740bad7b8aabbea05051dc6a2a2bf0f02e243559531d19e23188.scope: Deactivated successfully.
Oct 01 13:11:12 compute-0 podman[103784]: 2025-10-01 13:11:12.496824066 +0000 UTC m=+0.065263490 container create 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:11:12 compute-0 systemd[1]: Started libpod-conmon-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope.
Oct 01 13:11:12 compute-0 podman[103784]: 2025-10-01 13:11:12.474023181 +0000 UTC m=+0.042462595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:12 compute-0 podman[103784]: 2025-10-01 13:11:12.61346886 +0000 UTC m=+0.181908284 container init 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:11:12 compute-0 podman[103784]: 2025-10-01 13:11:12.624469525 +0000 UTC m=+0.192908929 container start 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:11:12 compute-0 podman[103784]: 2025-10-01 13:11:12.627664223 +0000 UTC m=+0.196103667 container attach 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:11:12 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 12 completed events
Oct 01 13:11:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:11:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct 01 13:11:13 compute-0 ceph-mon[74802]: pgmap v114: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 13:11:13 compute-0 ceph-mon[74802]: osdmap e52: 3 total, 3 up, 3 in
Oct 01 13:11:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct 01 13:11:13 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct 01 13:11:13 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct 01 13:11:13 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:13 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.3( v 41'2 lc 0'0 (0'0,41'2] local-lis/les=52/53 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:13 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:13 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 53 pg[6.7( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=52/53 n=1 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:13 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]: {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     "0": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "devices": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "/dev/loop3"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             ],
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_name": "ceph_lv0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_size": "21470642176",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "name": "ceph_lv0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "tags": {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.crush_device_class": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.encrypted": "0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_id": "0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.vdo": "0"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             },
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "vg_name": "ceph_vg0"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         }
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     ],
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     "1": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "devices": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "/dev/loop4"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             ],
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_name": "ceph_lv1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_size": "21470642176",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "name": "ceph_lv1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "tags": {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.crush_device_class": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.encrypted": "0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_id": "1",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.vdo": "0"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             },
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "vg_name": "ceph_vg1"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         }
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     ],
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     "2": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "devices": [
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "/dev/loop5"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             ],
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_name": "ceph_lv2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_size": "21470642176",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "name": "ceph_lv2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "tags": {
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.cluster_name": "ceph",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.crush_device_class": "",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.encrypted": "0",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osd_id": "2",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:                 "ceph.vdo": "0"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             },
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "type": "block",
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:             "vg_name": "ceph_vg2"
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:         }
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]:     ]
Oct 01 13:11:13 compute-0 quirky_archimedes[103801]: }
Oct 01 13:11:13 compute-0 podman[103784]: 2025-10-01 13:11:13.41356267 +0000 UTC m=+0.982002094 container died 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 01 13:11:13 compute-0 systemd[1]: libpod-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope: Deactivated successfully.
Oct 01 13:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30da8a6188c7f22997abf608ea7657f938fb9f5c59c1c2c2040a85918b0ec76-merged.mount: Deactivated successfully.
Oct 01 13:11:13 compute-0 podman[103784]: 2025-10-01 13:11:13.49464069 +0000 UTC m=+1.063080114 container remove 93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:13 compute-0 systemd[1]: libpod-conmon-93e31a187f118886e3e03eab4cb0e07f8bb22277a2044fef4853971019db53c1.scope: Deactivated successfully.
Oct 01 13:11:13 compute-0 sudo[103680]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:13 compute-0 sudo[103825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:13 compute-0 sudo[103825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:13 compute-0 sudo[103825]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:13 compute-0 sudo[103850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:11:13 compute-0 sudo[103850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:13 compute-0 sudo[103850]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v117: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 2 keys/s, 3 objects/s recovering
Oct 01 13:11:13 compute-0 sudo[103875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:13 compute-0 sudo[103875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:13 compute-0 sudo[103875]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:13 compute-0 sudo[103900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:11:13 compute-0 sudo[103900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:14 compute-0 ceph-mon[74802]: osdmap e53: 3 total, 3 up, 3 in
Oct 01 13:11:14 compute-0 ceph-mon[74802]: 2.e scrub starts
Oct 01 13:11:14 compute-0 ceph-mon[74802]: 2.e scrub ok
Oct 01 13:11:14 compute-0 podman[103967]: 2025-10-01 13:11:14.357048618 +0000 UTC m=+0.058505523 container create 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:11:14 compute-0 systemd[1]: Started libpod-conmon-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope.
Oct 01 13:11:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:14 compute-0 podman[103967]: 2025-10-01 13:11:14.337460892 +0000 UTC m=+0.038917787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:14 compute-0 podman[103967]: 2025-10-01 13:11:14.445461103 +0000 UTC m=+0.146918018 container init 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:11:14 compute-0 podman[103967]: 2025-10-01 13:11:14.452776086 +0000 UTC m=+0.154232961 container start 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:14 compute-0 podman[103967]: 2025-10-01 13:11:14.455895611 +0000 UTC m=+0.157352536 container attach 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:11:14 compute-0 laughing_pascal[103983]: 167 167
Oct 01 13:11:14 compute-0 systemd[1]: libpod-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope: Deactivated successfully.
Oct 01 13:11:14 compute-0 conmon[103983]: conmon 55a4ac52b654192137c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope/container/memory.events
Oct 01 13:11:14 compute-0 podman[103988]: 2025-10-01 13:11:14.523222982 +0000 UTC m=+0.045166467 container died 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d073172abde0bcbb35de0b6dfc831f3a4e0fd7f5f97e48031c04ecf1742a483-merged.mount: Deactivated successfully.
Oct 01 13:11:14 compute-0 podman[103988]: 2025-10-01 13:11:14.574804263 +0000 UTC m=+0.096747698 container remove 55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:11:14 compute-0 systemd[1]: libpod-conmon-55a4ac52b654192137c0ab96bfb60485763b3708a9c68adefd07f5494aac6689.scope: Deactivated successfully.
Oct 01 13:11:14 compute-0 podman[104009]: 2025-10-01 13:11:14.822257494 +0000 UTC m=+0.066036343 container create fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:11:14 compute-0 systemd[1]: Started libpod-conmon-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope.
Oct 01 13:11:14 compute-0 podman[104009]: 2025-10-01 13:11:14.797352675 +0000 UTC m=+0.041131534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:11:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:14 compute-0 podman[104009]: 2025-10-01 13:11:14.919120845 +0000 UTC m=+0.162899754 container init fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:14 compute-0 podman[104009]: 2025-10-01 13:11:14.932730661 +0000 UTC m=+0.176509510 container start fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:11:14 compute-0 podman[104009]: 2025-10-01 13:11:14.942021653 +0000 UTC m=+0.185800502 container attach fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:11:15 compute-0 ceph-mon[74802]: pgmap v117: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 2 keys/s, 3 objects/s recovering
Oct 01 13:11:15 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 01 13:11:15 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 01 13:11:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 01 13:11:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 01 13:11:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v118: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 139 B/s, 1 keys/s, 2 objects/s recovering
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]: {
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_id": 0,
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "type": "bluestore"
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     },
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_id": 2,
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "type": "bluestore"
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     },
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_id": 1,
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:         "type": "bluestore"
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]:     }
Oct 01 13:11:15 compute-0 upbeat_northcutt[104026]: }
Oct 01 13:11:15 compute-0 systemd[1]: libpod-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Deactivated successfully.
Oct 01 13:11:15 compute-0 systemd[1]: libpod-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Consumed 1.034s CPU time.
Oct 01 13:11:15 compute-0 podman[104009]: 2025-10-01 13:11:15.965349956 +0000 UTC m=+1.209128825 container died fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-92994c209488df1e9749e000421777294e7e24b6d0dc01df3e4c0908922b8e88-merged.mount: Deactivated successfully.
Oct 01 13:11:16 compute-0 podman[104009]: 2025-10-01 13:11:16.054953586 +0000 UTC m=+1.298732425 container remove fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:11:16 compute-0 systemd[1]: libpod-conmon-fc803cdfa7f67426125bda5ac62eda6d15cda63e95d080bd0561d0e4fb1f98dd.scope: Deactivated successfully.
Oct 01 13:11:16 compute-0 ceph-mon[74802]: 4.15 scrub starts
Oct 01 13:11:16 compute-0 ceph-mon[74802]: 4.15 scrub ok
Oct 01 13:11:16 compute-0 ceph-mon[74802]: 3.d scrub starts
Oct 01 13:11:16 compute-0 ceph-mon[74802]: 3.d scrub ok
Oct 01 13:11:16 compute-0 sudo[103900]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:11:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:11:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:16 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d36144a5-4421-46cb-b1e6-efcbd3d256dc does not exist
Oct 01 13:11:16 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 67c471be-7b80-4f27-82c8-d1e1187df4ce does not exist
Oct 01 13:11:16 compute-0 sudo[104071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:11:16 compute-0 sudo[104071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:16 compute-0 sudo[104071]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct 01 13:11:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct 01 13:11:16 compute-0 sudo[104096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:11:16 compute-0 sudo[104096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:11:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 01 13:11:16 compute-0 sudo[104096]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 01 13:11:17 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 01 13:11:17 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 01 13:11:17 compute-0 ceph-mon[74802]: pgmap v118: 181 pgs: 4 peering, 177 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 139 B/s, 1 keys/s, 2 objects/s recovering
Oct 01 13:11:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:17 compute-0 ceph-mon[74802]: 4.16 scrub starts
Oct 01 13:11:17 compute-0 ceph-mon[74802]: 4.16 scrub ok
Oct 01 13:11:17 compute-0 ceph-mon[74802]: 3.10 scrub starts
Oct 01 13:11:17 compute-0 ceph-mon[74802]: 3.10 scrub ok
Oct 01 13:11:17 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct 01 13:11:17 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct 01 13:11:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 01 13:11:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v119: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 213 B/s, 2 keys/s, 2 objects/s recovering
Oct 01 13:11:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 01 13:11:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct 01 13:11:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct 01 13:11:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 2.10 scrub starts
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 2.10 scrub ok
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 4.17 scrub starts
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 4.17 scrub ok
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 3.13 scrub starts
Oct 01 13:11:18 compute-0 ceph-mon[74802]: 3.13 scrub ok
Oct 01 13:11:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 13:11:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 13:11:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct 01 13:11:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct 01 13:11:18 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 01 13:11:18 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 01 13:11:19 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.c( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.630481720s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'2 lcod 41'1 mlcod 41'1 active pruub 92.860504150s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:19 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.c( v 41'2 (0'0,41'2] local-lis/les=37/39 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.630410194s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'2 lcod 41'1 mlcod 0'0 unknown NOTIFY pruub 92.860504150s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:19 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.4( v 41'6 (0'0,41'6] local-lis/les=37/39 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.624657631s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'6 lcod 41'5 mlcod 41'5 active pruub 92.855270386s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:19 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 54 pg[6.4( v 41'6 (0'0,41'6] local-lis/les=37/39 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54 pruub=9.624556541s) [1] r=-1 lpr=54 pi=[37,54)/1 crt=41'6 lcod 41'5 mlcod 0'0 unknown NOTIFY pruub 92.855270386s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:19 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 54 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:19 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 54 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct 01 13:11:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct 01 13:11:19 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct 01 13:11:19 compute-0 ceph-mon[74802]: pgmap v119: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 213 B/s, 2 keys/s, 2 objects/s recovering
Oct 01 13:11:19 compute-0 ceph-mon[74802]: 2.12 scrub starts
Oct 01 13:11:19 compute-0 ceph-mon[74802]: 2.12 scrub ok
Oct 01 13:11:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 13:11:19 compute-0 ceph-mon[74802]: osdmap e54: 3 total, 3 up, 3 in
Oct 01 13:11:19 compute-0 ceph-mon[74802]: 4.19 scrub starts
Oct 01 13:11:19 compute-0 ceph-mon[74802]: 4.19 scrub ok
Oct 01 13:11:19 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 55 pg[6.4( v 41'6 lc 41'1 (0'0,41'6] local-lis/les=54/55 n=4 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=41'6 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:19 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 55 pg[6.c( v 41'2 lc 41'1 (0'0,41'2] local-lis/les=54/55 n=1 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=54) [1] r=0 lpr=54 pi=[37,54)/1 crt=41'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:19 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Oct 01 13:11:19 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Oct 01 13:11:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v122: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 95 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 13:11:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 01 13:11:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 13:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct 01 13:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 13:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct 01 13:11:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct 01 13:11:20 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct 01 13:11:20 compute-0 ceph-mon[74802]: osdmap e55: 3 total, 3 up, 3 in
Oct 01 13:11:20 compute-0 ceph-mon[74802]: 4.1d deep-scrub starts
Oct 01 13:11:20 compute-0 ceph-mon[74802]: 4.1d deep-scrub ok
Oct 01 13:11:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 13:11:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct 01 13:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:21 compute-0 ceph-mon[74802]: pgmap v122: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 95 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 13:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 13:11:21 compute-0 ceph-mon[74802]: 4.1e scrub starts
Oct 01 13:11:21 compute-0 ceph-mon[74802]: osdmap e56: 3 total, 3 up, 3 in
Oct 01 13:11:21 compute-0 ceph-mon[74802]: 4.1e scrub ok
Oct 01 13:11:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v124: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 13:11:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 01 13:11:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 13:11:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct 01 13:11:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 13:11:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct 01 13:11:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct 01 13:11:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 13:11:22 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 56 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.446735382s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 41'3 active pruub 92.188621521s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:22 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 56 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.444850922s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 41'3 active pruub 92.187164307s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:22 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 57 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.444769859s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 92.187164307s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:22 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 57 pg[6.5( v 41'3 (0'0,41'3] local-lis/les=48/49 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=10.445859909s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 92.188621521s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:22 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:22 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:23 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 01 13:11:23 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 01 13:11:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct 01 13:11:23 compute-0 ceph-mon[74802]: pgmap v124: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Oct 01 13:11:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 13:11:23 compute-0 ceph-mon[74802]: osdmap e57: 3 total, 3 up, 3 in
Oct 01 13:11:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct 01 13:11:23 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct 01 13:11:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 58 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 58 pg[6.5( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=56) [0] r=0 lpr=57 pi=[48,56)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v127: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 343 B/s, 1 objects/s recovering
Oct 01 13:11:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 01 13:11:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 13:11:24 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 01 13:11:24 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 01 13:11:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct 01 13:11:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 13:11:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct 01 13:11:24 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct 01 13:11:24 compute-0 ceph-mon[74802]: 4.1f scrub starts
Oct 01 13:11:24 compute-0 ceph-mon[74802]: 4.1f scrub ok
Oct 01 13:11:24 compute-0 ceph-mon[74802]: osdmap e58: 3 total, 3 up, 3 in
Oct 01 13:11:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 13:11:25 compute-0 ceph-mon[74802]: pgmap v127: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 343 B/s, 1 objects/s recovering
Oct 01 13:11:25 compute-0 ceph-mon[74802]: 3.14 scrub starts
Oct 01 13:11:25 compute-0 ceph-mon[74802]: 3.14 scrub ok
Oct 01 13:11:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 13:11:25 compute-0 ceph-mon[74802]: osdmap e59: 3 total, 3 up, 3 in
Oct 01 13:11:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v129: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 0 objects/s recovering
Oct 01 13:11:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 01 13:11:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 13:11:26 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 01 13:11:26 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 01 13:11:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct 01 13:11:26 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 13:11:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 13:11:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct 01 13:11:26 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct 01 13:11:27 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 01 13:11:27 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 01 13:11:27 compute-0 ceph-mon[74802]: pgmap v129: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 0 objects/s recovering
Oct 01 13:11:27 compute-0 ceph-mon[74802]: 6.8 scrub starts
Oct 01 13:11:27 compute-0 ceph-mon[74802]: 6.8 scrub ok
Oct 01 13:11:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 13:11:27 compute-0 ceph-mon[74802]: osdmap e60: 3 total, 3 up, 3 in
Oct 01 13:11:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v131: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 01 13:11:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 13:11:27 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60 pruub=8.919249535s) [2] r=-1 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 active pruub 100.855567932s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:27 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 60 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60 pruub=8.919156075s) [2] r=-1 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.855567932s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 60 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60) [2] r=0 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:28 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 01 13:11:28 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 01 13:11:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct 01 13:11:28 compute-0 ceph-mon[74802]: 3.1b scrub starts
Oct 01 13:11:28 compute-0 ceph-mon[74802]: 3.1b scrub ok
Oct 01 13:11:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 13:11:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 13:11:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct 01 13:11:28 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct 01 13:11:28 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=12.707841873s) [0] r=-1 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 active pruub 100.188720703s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:28 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 61 pg[6.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=12.707711220s) [0] r=-1 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.188720703s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:28 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 61 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:28 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 61 pg[6.8( empty local-lis/les=60/61 n=0 ec=37/20 lis/c=37/37 les/c/f=39/39/0 sis=60) [2] r=0 lpr=60 pi=[37,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:28 compute-0 sshd-session[104121]: Received disconnect from 80.253.31.232 port 41264:11: Bye Bye [preauth]
Oct 01 13:11:28 compute-0 sshd-session[104121]: Disconnected from authenticating user root 80.253.31.232 port 41264 [preauth]
Oct 01 13:11:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct 01 13:11:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct 01 13:11:29 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct 01 13:11:29 compute-0 ceph-mon[74802]: pgmap v131: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:29 compute-0 ceph-mon[74802]: 2.14 scrub starts
Oct 01 13:11:29 compute-0 ceph-mon[74802]: 2.14 scrub ok
Oct 01 13:11:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 13:11:29 compute-0 ceph-mon[74802]: osdmap e61: 3 total, 3 up, 3 in
Oct 01 13:11:29 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 62 pg[6.9( empty local-lis/les=61/62 n=0 ec=37/20 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v134: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 01 13:11:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 13:11:30 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 01 13:11:30 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 01 13:11:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct 01 13:11:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 13:11:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct 01 13:11:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct 01 13:11:30 compute-0 ceph-mon[74802]: osdmap e62: 3 total, 3 up, 3 in
Oct 01 13:11:30 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 13:11:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:31 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 01 13:11:31 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 01 13:11:31 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 01 13:11:31 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 01 13:11:31 compute-0 ceph-mon[74802]: pgmap v134: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:31 compute-0 ceph-mon[74802]: 3.19 scrub starts
Oct 01 13:11:31 compute-0 ceph-mon[74802]: 3.19 scrub ok
Oct 01 13:11:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 13:11:31 compute-0 ceph-mon[74802]: osdmap e63: 3 total, 3 up, 3 in
Oct 01 13:11:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v136: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 01 13:11:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 13:11:31 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 63 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63 pruub=11.162081718s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 active pruub 102.199279785s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:31 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 63 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=50/51 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63 pruub=11.161931992s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 102.199279785s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:31 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 63 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 01 13:11:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 01 13:11:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct 01 13:11:32 compute-0 ceph-mon[74802]: 3.1a scrub starts
Oct 01 13:11:32 compute-0 ceph-mon[74802]: 3.1a scrub ok
Oct 01 13:11:32 compute-0 ceph-mon[74802]: 7.1f scrub starts
Oct 01 13:11:32 compute-0 ceph-mon[74802]: 7.1f scrub ok
Oct 01 13:11:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 13:11:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 13:11:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct 01 13:11:32 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct 01 13:11:32 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=12.700857162s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 41'3 active pruub 109.230659485s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:32 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.b( v 41'3 (0'0,41'3] local-lis/les=52/53 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=12.700545311s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 109.230659485s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:32 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 64 pg[6.a( v 41'1 (0'0,41'1] local-lis/les=63/64 n=0 ec=37/20 lis/c=50/50 les/c/f=51/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:32 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 64 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:33 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 01 13:11:33 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 01 13:11:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct 01 13:11:33 compute-0 ceph-mon[74802]: pgmap v136: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 0 objects/s recovering
Oct 01 13:11:33 compute-0 ceph-mon[74802]: 2.1a scrub starts
Oct 01 13:11:33 compute-0 ceph-mon[74802]: 2.1a scrub ok
Oct 01 13:11:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 13:11:33 compute-0 ceph-mon[74802]: osdmap e64: 3 total, 3 up, 3 in
Oct 01 13:11:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct 01 13:11:33 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct 01 13:11:33 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 65 pg[6.b( v 41'3 lc 0'0 (0'0,41'3] local-lis/les=64/65 n=1 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=41'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v139: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 01 13:11:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 13:11:34 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Oct 01 13:11:34 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Oct 01 13:11:34 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 01 13:11:34 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 01 13:11:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct 01 13:11:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 13:11:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct 01 13:11:34 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct 01 13:11:34 compute-0 ceph-mon[74802]: 2.1e scrub starts
Oct 01 13:11:34 compute-0 ceph-mon[74802]: 2.1e scrub ok
Oct 01 13:11:34 compute-0 ceph-mon[74802]: osdmap e65: 3 total, 3 up, 3 in
Oct 01 13:11:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 13:11:35 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 01 13:11:35 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 01 13:11:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct 01 13:11:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct 01 13:11:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:35 compute-0 ceph-mon[74802]: pgmap v139: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:35 compute-0 ceph-mon[74802]: 3.1c deep-scrub starts
Oct 01 13:11:35 compute-0 ceph-mon[74802]: 3.1c deep-scrub ok
Oct 01 13:11:35 compute-0 ceph-mon[74802]: 3.f scrub starts
Oct 01 13:11:35 compute-0 ceph-mon[74802]: 3.f scrub ok
Oct 01 13:11:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 13:11:35 compute-0 ceph-mon[74802]: osdmap e66: 3 total, 3 up, 3 in
Oct 01 13:11:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v141: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 01 13:11:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 13:11:36 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct 01 13:11:36 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct 01 13:11:36 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 01 13:11:36 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 01 13:11:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct 01 13:11:36 compute-0 ceph-mon[74802]: 7.7 scrub starts
Oct 01 13:11:36 compute-0 ceph-mon[74802]: 7.7 scrub ok
Oct 01 13:11:36 compute-0 ceph-mon[74802]: 7.4 scrub starts
Oct 01 13:11:36 compute-0 ceph-mon[74802]: 7.4 scrub ok
Oct 01 13:11:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 13:11:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 13:11:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct 01 13:11:36 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct 01 13:11:37 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 01 13:11:37 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 01 13:11:37 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct 01 13:11:37 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct 01 13:11:37 compute-0 sshd-session[104123]: Received disconnect from 27.254.137.144 port 58720:11: Bye Bye [preauth]
Oct 01 13:11:37 compute-0 sshd-session[104123]: Disconnected from authenticating user root 27.254.137.144 port 58720 [preauth]
Oct 01 13:11:37 compute-0 ceph-mon[74802]: pgmap v141: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:37 compute-0 ceph-mon[74802]: 5.6 scrub starts
Oct 01 13:11:37 compute-0 ceph-mon[74802]: 5.6 scrub ok
Oct 01 13:11:37 compute-0 ceph-mon[74802]: 3.c scrub starts
Oct 01 13:11:37 compute-0 ceph-mon[74802]: 3.c scrub ok
Oct 01 13:11:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 13:11:37 compute-0 ceph-mon[74802]: osdmap e67: 3 total, 3 up, 3 in
Oct 01 13:11:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v143: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 01 13:11:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 13:11:37 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 67 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67 pruub=9.383437157s) [1] r=-1 lpr=67 pi=[56,67)/1 crt=41'3 mlcod 41'3 active pruub 111.416503906s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:37 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 67 pg[6.d( v 41'3 (0'0,41'3] local-lis/les=56/58 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67 pruub=9.383358002s) [1] r=-1 lpr=67 pi=[56,67)/1 crt=41'3 mlcod 0'0 unknown NOTIFY pruub 111.416503906s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:37 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 67 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:38 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Oct 01 13:11:38 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Oct 01 13:11:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 01 13:11:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 01 13:11:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct 01 13:11:38 compute-0 ceph-mon[74802]: 5.8 scrub starts
Oct 01 13:11:38 compute-0 ceph-mon[74802]: 5.8 scrub ok
Oct 01 13:11:38 compute-0 ceph-mon[74802]: 7.b scrub starts
Oct 01 13:11:38 compute-0 ceph-mon[74802]: 7.b scrub ok
Oct 01 13:11:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 13:11:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 13:11:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct 01 13:11:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct 01 13:11:38 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 68 pg[6.d( v 41'3 lc 41'1 (0'0,41'3] local-lis/les=67/68 n=2 ec=37/20 lis/c=56/56 les/c/f=58/58/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/1 crt=41'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:39 compute-0 ceph-mon[74802]: pgmap v143: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:39 compute-0 ceph-mon[74802]: 5.a deep-scrub starts
Oct 01 13:11:39 compute-0 ceph-mon[74802]: 5.a deep-scrub ok
Oct 01 13:11:39 compute-0 ceph-mon[74802]: 7.d scrub starts
Oct 01 13:11:39 compute-0 ceph-mon[74802]: 7.d scrub ok
Oct 01 13:11:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 13:11:39 compute-0 ceph-mon[74802]: osdmap e68: 3 total, 3 up, 3 in
Oct 01 13:11:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v145: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 01 13:11:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 13:11:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 01 13:11:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 01 13:11:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct 01 13:11:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 13:11:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 13:11:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct 01 13:11:40 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct 01 13:11:40 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 69 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=12.476483345s) [2] r=-1 lpr=69 pi=[52,69)/1 crt=41'5 mlcod 41'5 active pruub 117.227127075s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:40 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 69 pg[6.f( v 41'5 (0'0,41'5] local-lis/les=52/53 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=12.476371765s) [2] r=-1 lpr=69 pi=[52,69)/1 crt=41'5 mlcod 0'0 unknown NOTIFY pruub 117.227127075s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:11:40 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 69 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69) [2] r=0 lpr=69 pi=[52,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:41 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct 01 13:11:41 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct 01 13:11:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct 01 13:11:41 compute-0 ceph-mon[74802]: pgmap v145: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:41 compute-0 ceph-mon[74802]: 5.b scrub starts
Oct 01 13:11:41 compute-0 ceph-mon[74802]: 5.b scrub ok
Oct 01 13:11:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 13:11:41 compute-0 ceph-mon[74802]: osdmap e69: 3 total, 3 up, 3 in
Oct 01 13:11:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct 01 13:11:41 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct 01 13:11:41 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 70 pg[6.f( v 41'5 lc 41'1 (0'0,41'5] local-lis/les=69/70 n=3 ec=37/20 lis/c=52/52 les/c/f=53/53/0 sis=69) [2] r=0 lpr=69 pi=[52,69)/1 crt=41'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v148: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:42 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 01 13:11:42 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 01 13:11:42 compute-0 ceph-mon[74802]: 5.1e scrub starts
Oct 01 13:11:42 compute-0 ceph-mon[74802]: 5.1e scrub ok
Oct 01 13:11:42 compute-0 ceph-mon[74802]: osdmap e70: 3 total, 3 up, 3 in
Oct 01 13:11:43 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct 01 13:11:43 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct 01 13:11:43 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct 01 13:11:43 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct 01 13:11:43 compute-0 ceph-mon[74802]: pgmap v148: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:11:43 compute-0 ceph-mon[74802]: 5.d scrub starts
Oct 01 13:11:43 compute-0 ceph-mon[74802]: 5.d scrub ok
Oct 01 13:11:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v149: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Oct 01 13:11:44 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 01 13:11:44 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 01 13:11:44 compute-0 ceph-mon[74802]: 7.10 scrub starts
Oct 01 13:11:44 compute-0 ceph-mon[74802]: 2.19 scrub starts
Oct 01 13:11:44 compute-0 ceph-mon[74802]: 7.10 scrub ok
Oct 01 13:11:44 compute-0 ceph-mon[74802]: 2.19 scrub ok
Oct 01 13:11:45 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 01 13:11:45 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 01 13:11:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 01 13:11:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 01 13:11:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:45 compute-0 ceph-mon[74802]: pgmap v149: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 112 B/s, 0 objects/s recovering
Oct 01 13:11:45 compute-0 ceph-mon[74802]: 2.16 scrub starts
Oct 01 13:11:45 compute-0 ceph-mon[74802]: 2.16 scrub ok
Oct 01 13:11:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v150: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 0 objects/s recovering
Oct 01 13:11:46 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 01 13:11:46 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 01 13:11:46 compute-0 ceph-mon[74802]: 5.e scrub starts
Oct 01 13:11:46 compute-0 ceph-mon[74802]: 5.e scrub ok
Oct 01 13:11:46 compute-0 ceph-mon[74802]: 7.12 scrub starts
Oct 01 13:11:46 compute-0 ceph-mon[74802]: 7.12 scrub ok
Oct 01 13:11:46 compute-0 ceph-mon[74802]: pgmap v150: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 0 objects/s recovering
Oct 01 13:11:47 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 01 13:11:47 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 01 13:11:47 compute-0 ceph-mon[74802]: 2.18 scrub starts
Oct 01 13:11:47 compute-0 ceph-mon[74802]: 2.18 scrub ok
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:11:47
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v151: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 0 objects/s recovering
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:11:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:11:48 compute-0 ceph-mon[74802]: 7.14 scrub starts
Oct 01 13:11:48 compute-0 ceph-mon[74802]: 7.14 scrub ok
Oct 01 13:11:48 compute-0 ceph-mon[74802]: pgmap v151: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 84 B/s, 0 objects/s recovering
Oct 01 13:11:49 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct 01 13:11:49 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct 01 13:11:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v152: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 0 objects/s recovering
Oct 01 13:11:50 compute-0 sudo[104148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lszrxkcrhyiylfcwfsfxpxssraxgqche ; /usr/bin/python3'
Oct 01 13:11:50 compute-0 sudo[104148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:50 compute-0 python3[104150]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:50 compute-0 podman[104151]: 2025-10-01 13:11:50.369635911 +0000 UTC m=+0.057176630 container create bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:11:50 compute-0 systemd[1]: Started libpod-conmon-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope.
Oct 01 13:11:50 compute-0 podman[104151]: 2025-10-01 13:11:50.339870339 +0000 UTC m=+0.027411108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:50 compute-0 podman[104151]: 2025-10-01 13:11:50.479518506 +0000 UTC m=+0.167059245 container init bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:11:50 compute-0 podman[104151]: 2025-10-01 13:11:50.48994439 +0000 UTC m=+0.177485069 container start bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:11:50 compute-0 podman[104151]: 2025-10-01 13:11:50.49339405 +0000 UTC m=+0.180934809 container attach bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:11:50 compute-0 dazzling_austin[104166]: could not fetch user info: no user info saved
Oct 01 13:11:50 compute-0 systemd[1]: libpod-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope: Deactivated successfully.
Oct 01 13:11:50 compute-0 podman[104251]: 2025-10-01 13:11:50.780181375 +0000 UTC m=+0.031817639 container died bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-81df86771aff73a1b18386ebbbeae98e3b99d453f1edfde32f853c1e770c7a21-merged.mount: Deactivated successfully.
Oct 01 13:11:50 compute-0 ceph-mon[74802]: 2.13 deep-scrub starts
Oct 01 13:11:50 compute-0 ceph-mon[74802]: 2.13 deep-scrub ok
Oct 01 13:11:50 compute-0 ceph-mon[74802]: pgmap v152: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 0 objects/s recovering
Oct 01 13:11:50 compute-0 podman[104251]: 2025-10-01 13:11:50.833429609 +0000 UTC m=+0.085065823 container remove bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1 (image=quay.io/ceph/ceph:v18, name=dazzling_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:11:50 compute-0 systemd[1]: libpod-conmon-bd996ac397002cd68b94d36255efc51d38ba555e6825a37328f3966a3a3566f1.scope: Deactivated successfully.
Oct 01 13:11:50 compute-0 sudo[104148]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:51 compute-0 sudo[104289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnithxulbjvbtrmkewugadiibfzjngn ; /usr/bin/python3'
Oct 01 13:11:51 compute-0 sudo[104289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:11:51 compute-0 python3[104291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:11:51 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.15 deep-scrub starts
Oct 01 13:11:51 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.15 deep-scrub ok
Oct 01 13:11:51 compute-0 podman[104292]: 2025-10-01 13:11:51.318350982 +0000 UTC m=+0.043895756 container create 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:11:51 compute-0 systemd[1]: Started libpod-conmon-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope.
Oct 01 13:11:51 compute-0 podman[104292]: 2025-10-01 13:11:51.298925381 +0000 UTC m=+0.024470245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct 01 13:11:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:11:51 compute-0 podman[104292]: 2025-10-01 13:11:51.446538773 +0000 UTC m=+0.172083597 container init 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:11:51 compute-0 podman[104292]: 2025-10-01 13:11:51.454309771 +0000 UTC m=+0.179854555 container start 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:11:51 compute-0 podman[104292]: 2025-10-01 13:11:51.457797673 +0000 UTC m=+0.183342537 container attach 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:11:51 compute-0 beautiful_saha[104306]: {
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "user_id": "openstack",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "display_name": "openstack",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "email": "",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "suspended": 0,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "max_buckets": 1000,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "subusers": [],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "keys": [
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         {
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:             "user": "openstack",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:             "access_key": "9YAP2ZLHAPGVIL9ZU6WF",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:             "secret_key": "rF7sUO0A5DaWbo1mPhff1hc6i3JP5EljOueYTCnc"
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         }
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     ],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "swift_keys": [],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "caps": [],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "op_mask": "read, write, delete",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "default_placement": "",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "default_storage_class": "",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "placement_tags": [],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "bucket_quota": {
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "enabled": false,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "check_on_raw": false,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_size": -1,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_size_kb": 0,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_objects": -1
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     },
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "user_quota": {
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "enabled": false,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "check_on_raw": false,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_size": -1,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_size_kb": 0,
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:         "max_objects": -1
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     },
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "temp_url_keys": [],
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "type": "rgw",
Oct 01 13:11:51 compute-0 beautiful_saha[104306]:     "mfa_ids": []
Oct 01 13:11:51 compute-0 beautiful_saha[104306]: }
Oct 01 13:11:51 compute-0 beautiful_saha[104306]: 
Oct 01 13:11:51 compute-0 systemd[1]: libpod-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope: Deactivated successfully.
Oct 01 13:11:51 compute-0 podman[104391]: 2025-10-01 13:11:51.741026574 +0000 UTC m=+0.041666964 container died 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:11:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v153: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Oct 01 13:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d400193d2f521256c718bd34fc038e2731419a2761b24e6fa11a42489f799b4c-merged.mount: Deactivated successfully.
Oct 01 13:11:51 compute-0 podman[104391]: 2025-10-01 13:11:51.802001645 +0000 UTC m=+0.102642025 container remove 1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69 (image=quay.io/ceph/ceph:v18, name=beautiful_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:11:51 compute-0 systemd[1]: libpod-conmon-1e9bf876342b76ee8ea7fee6c02f3e84d1aee168cc451a5682359721d5be7d69.scope: Deactivated successfully.
Oct 01 13:11:51 compute-0 sudo[104289]: pam_unix(sudo:session): session closed for user root
Oct 01 13:11:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct 01 13:11:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct 01 13:11:52 compute-0 sshd-session[104406]: Invalid user bala from 156.236.31.46 port 43914
Oct 01 13:11:52 compute-0 sshd-session[104406]: Received disconnect from 156.236.31.46 port 43914:11: Bye Bye [preauth]
Oct 01 13:11:52 compute-0 sshd-session[104406]: Disconnected from invalid user bala 156.236.31.46 port 43914 [preauth]
Oct 01 13:11:52 compute-0 ceph-mon[74802]: 5.15 deep-scrub starts
Oct 01 13:11:52 compute-0 ceph-mon[74802]: 5.15 deep-scrub ok
Oct 01 13:11:52 compute-0 ceph-mon[74802]: pgmap v153: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Oct 01 13:11:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct 01 13:11:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct 01 13:11:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:11:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v154: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s; 56 B/s, 0 objects/s recovering
Oct 01 13:11:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct 01 13:11:53 compute-0 ceph-mon[74802]: 5.14 scrub starts
Oct 01 13:11:53 compute-0 ceph-mon[74802]: 5.14 scrub ok
Oct 01 13:11:53 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct 01 13:11:53 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct 01 13:11:53 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 01 13:11:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:11:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:54 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 01 13:11:54 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 01 13:11:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 01 13:11:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 01 13:11:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct 01 13:11:54 compute-0 ceph-mon[74802]: 7.16 scrub starts
Oct 01 13:11:54 compute-0 ceph-mon[74802]: 7.16 scrub ok
Oct 01 13:11:54 compute-0 ceph-mon[74802]: pgmap v154: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s; 56 B/s, 0 objects/s recovering
Oct 01 13:11:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:54 compute-0 ceph-mon[74802]: osdmap e71: 3 total, 3 up, 3 in
Oct 01 13:11:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct 01 13:11:54 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct 01 13:11:54 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 01 13:11:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:11:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:11:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct 01 13:11:55 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 01 13:11:55 compute-0 ceph-mon[74802]: 7.17 scrub starts
Oct 01 13:11:55 compute-0 ceph-mon[74802]: 7.17 scrub ok
Oct 01 13:11:55 compute-0 ceph-mon[74802]: 2.f scrub starts
Oct 01 13:11:55 compute-0 ceph-mon[74802]: 2.f scrub ok
Oct 01 13:11:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:55 compute-0 ceph-mon[74802]: osdmap e72: 3 total, 3 up, 3 in
Oct 01 13:11:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct 01 13:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct 01 13:11:56 compute-0 ceph-mon[74802]: pgmap v157: 181 pgs: 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct 01 13:11:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:56 compute-0 ceph-mon[74802]: osdmap e73: 3 total, 3 up, 3 in
Oct 01 13:11:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 01 13:11:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct 01 13:11:56 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] update: starting ev df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event c7c0e5d0-af8b-4a47-b75c-2afe630deb55 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event c03bb0ae-eb08-41a5-b304-87f964af89ac (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event 07824996-d13c-4845-926d-a95fdc21b6a1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] complete: finished ev df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct 01 13:11:56 compute-0 ceph-mgr[75103]: [progress INFO root] Completed event df1830a6-f900-422d-bce0-e21f9b42868d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 73 pg[9.0( v 70'389 (0'0,70'389] local-lis/les=41/42 n=177 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73 pruub=14.469636917s) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 70'388 active pruub 130.869049072s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 73 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=39/40 n=4 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73 pruub=12.452169418s) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 40'3 active pruub 128.851531982s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.0( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73 pruub=12.452169418s) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 0'0 unknown pruub 128.851531982s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.0( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73 pruub=14.469636917s) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 0'0 unknown pruub 130.869049072s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.9( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.7( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.5( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.17( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.8( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.3( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.a( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.e( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.b( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.f( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.2( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.16( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.d( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.c( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.14( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.11( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.6( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.15( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.12( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.4( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.13( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.10( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.18( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.19( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1a( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1b( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1c( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1d( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1e( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[9.1f( v 70'389 lc 0'0 (0'0,70'389] local-lis/les=41/42 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.4( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.8( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.5( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.11( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.13( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.3( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.10( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.12( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.7( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.2( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.6( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.15( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.14( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=39/40 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.16( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.17( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.18( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.19( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.1f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 74 pg[8.9( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 01 13:11:57 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 01 13:11:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v160: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct 01 13:11:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 01 13:11:57 compute-0 ceph-mon[74802]: osdmap e74: 3 total, 3 up, 3 in
Oct 01 13:11:57 compute-0 ceph-mon[74802]: 7.19 scrub starts
Oct 01 13:11:57 compute-0 ceph-mon[74802]: 7.19 scrub ok
Oct 01 13:11:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 01 13:11:57 compute-0 ceph-mgr[75103]: [progress INFO root] Writing back 16 completed events
Oct 01 13:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[11.0( v 70'2 (0'0,70'2] local-lis/les=45/46 n=2 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=9.853425026s) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 70'1 active pruub 126.967704773s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[11.0( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75 pruub=9.853425026s) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 0'0 unknown pruub 126.967704773s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.14( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.16( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.0( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 70'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.17( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.3( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.2( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.8( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.a( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 40'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.7( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.5( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.4( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1a( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.19( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.10( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[9.12( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=41/41 les/c/f=42/42/0 sis=73) [1] r=0 lpr=73 pi=[41,73)/1 crt=70'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 75 pg[8.13( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=39/39 les/c/f=40/40/0 sis=73) [1] r=0 lpr=73 pi=[39,73)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:57 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 75 pg[10.0( v 70'64 (0'0,70'64] local-lis/les=43/44 n=8 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=15.750879288s) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 70'63 active pruub 127.946723938s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 75 pg[10.0( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75 pruub=15.750879288s) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 0'0 unknown pruub 127.946723938s@ mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 01 13:11:58 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 01 13:11:58 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 01 13:11:58 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 01 13:11:58 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct 01 13:11:58 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct 01 13:11:58 compute-0 ceph-mon[74802]: pgmap v160: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 01 13:11:58 compute-0 ceph-mon[74802]: osdmap e75: 3 total, 3 up, 3 in
Oct 01 13:11:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 5.10 scrub starts
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 5.10 scrub ok
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 7.1d scrub starts
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 7.1d scrub ok
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 2.11 deep-scrub starts
Oct 01 13:11:58 compute-0 ceph-mon[74802]: 2.11 deep-scrub ok
Oct 01 13:11:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct 01 13:11:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct 01 13:11:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.16( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.17( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.15( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.14( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.13( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.2( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=45/46 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.f( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.e( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.b( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.c( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.8( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.a( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.3( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.5( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.4( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.6( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.7( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.18( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1a( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1b( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1c( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1e( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1f( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.11( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.12( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.9( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1e( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.19( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1b( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.b( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.10( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.a( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.19( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=45/46 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.d( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.11( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.13( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.12( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.10( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1f( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1c( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1a( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.18( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1d( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.6( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.5( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.4( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.8( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.f( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.7( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.9( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.16( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.c( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.e( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.2( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.3( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.14( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.15( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.16( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.17( v 70'64 lc 0'0 (0'0,70'64] local-lis/les=43/44 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.13( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.0( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 70'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:58 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.5( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.7( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 76 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=45/45 les/c/f=46/46/0 sis=75) [1] r=0 lpr=75 pi=[45,75)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.d( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1c( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.18( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1d( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.5( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.c( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.0( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 70'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.3( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.14( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.15( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.9( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 76 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=43/43 les/c/f=44/44/0 sis=75) [2] r=0 lpr=75 pi=[43,75)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:11:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:11:59 compute-0 ceph-mon[74802]: osdmap e76: 3 total, 3 up, 3 in
Oct 01 13:12:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct 01 13:12:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct 01 13:12:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct 01 13:12:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct 01 13:12:00 compute-0 ceph-mon[74802]: pgmap v163: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:00 compute-0 ceph-mon[74802]: 7.1e scrub starts
Oct 01 13:12:00 compute-0 ceph-mon[74802]: 7.1e scrub ok
Oct 01 13:12:00 compute-0 ceph-mon[74802]: 5.5 scrub starts
Oct 01 13:12:00 compute-0 ceph-mon[74802]: 5.5 scrub ok
Oct 01 13:12:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct 01 13:12:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct 01 13:12:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct 01 13:12:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct 01 13:12:02 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 01 13:12:02 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 01 13:12:03 compute-0 ceph-mon[74802]: pgmap v164: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 86 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.17 scrub starts
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.17 scrub ok
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.1d deep-scrub starts
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.1d deep-scrub ok
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.7 scrub starts
Oct 01 13:12:03 compute-0 ceph-mon[74802]: 5.7 scrub ok
Oct 01 13:12:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct 01 13:12:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct 01 13:12:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:12:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:12:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct 01 13:12:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 13:12:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:12:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct 01 13:12:04 compute-0 ceph-mon[74802]: 2.1b scrub starts
Oct 01 13:12:04 compute-0 ceph-mon[74802]: 2.1b scrub ok
Oct 01 13:12:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 01 13:12:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:12:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 13:12:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct 01 13:12:04 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968307495s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150558472s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968185425s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150558472s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951469421s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.133880615s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951457024s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.133880615s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951417923s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133880615s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951336861s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133880615s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968358994s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943604469s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.126419067s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.968303680s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151153564s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943548203s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.126419067s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950979233s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.133895874s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950869560s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.133895874s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.967068672s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150268555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.967015266s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150268555s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951047897s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134445190s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.951011658s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134445190s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966702461s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150283813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966644287s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150283813s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966640472s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150650024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950768471s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134841919s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=75/76 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.966590881s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150650024s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950712204s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134841919s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950326920s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134735107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950283051s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134735107s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965979576s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150711060s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965918541s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150711060s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950703621s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135711670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.950655937s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135711670s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949704170s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.134857178s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949651718s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134857178s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965412140s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150772095s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.965373993s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150772095s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949386597s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134887695s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949323654s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134887695s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964961052s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150726318s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.d( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964921951s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150726318s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949278831s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.135162354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.949225426s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135162354s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964778900s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150802612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.964682579s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150802612s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948793411s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.135162354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948756218s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135162354s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947757721s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.134475708s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947700500s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.134475708s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948555946s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135604858s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947916031s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135025024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963786125s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.150909424s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947872162s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135025024s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948415756s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135604858s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963736534s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.150909424s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948323250s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135681152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948291779s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135681152s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948054314s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.135711670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963291168s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151000977s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948015213s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.135711670s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963233948s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151000977s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963236809s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151077271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.963199615s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151077271s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948027611s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136093140s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948133469s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136215210s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948172569s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136291504s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948074341s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136215210s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948130608s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136291504s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962885857s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151153564s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962850571s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151153564s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947865486s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136337280s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948077202s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136581421s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947834015s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136337280s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=73/75 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.948032379s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136581421s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947632790s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136367798s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962381363s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151123047s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947601318s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136367798s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962336540s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151123047s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947252274s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136093140s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962396622s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151229858s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962006569s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151168823s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.962041855s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151229858s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947217941s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.136489868s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.947071075s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.136535645s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946246147s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136489868s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946340561s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.136535645s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.960991859s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151168823s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.946080208s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137268066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945987701s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137268066s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959937096s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151321411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959891319s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151321411s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959763527s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151214600s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959650040s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151214600s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959566116s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151290894s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945618629s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137359619s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.959533691s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151290894s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945562363s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137359619s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945498466s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137145996s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945451736s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137466431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945230484s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137145996s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945377350s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137466431s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945072174s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137496948s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945087433s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137573242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.945037842s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137573242s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944945335s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137496948s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944779396s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137619019s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944636345s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137619019s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944744110s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137680054s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.958064079s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151336670s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957771301s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151351929s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.9( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957732201s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151351929s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.958518028s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151321411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957483292s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151321411s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.957916260s) [2] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151336670s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.944570541s) [2] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137680054s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.943033218s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137680054s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942840576s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137680054s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.10( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942263603s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 133.137725830s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955919266s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151412964s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955835342s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151412964s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=73/75 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.942021370s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137725830s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.941924095s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 133.137832642s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77 pruub=9.941827774s) [0] r=-1 lpr=77 pi=[73,77)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.137832642s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.b( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.955692291s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active pruub 134.151367188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.4( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.15( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=75/76 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.954754829s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.151367188s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.15( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.14( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.2( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.6( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.3( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.2( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.9( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.6( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.d( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.8( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.d( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.9( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.4( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1b( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1c( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1e( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.f( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.12( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.e( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.12( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.b( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.18( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.1b( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.e( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.f( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.c( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1a( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.1f( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.1c( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.1( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[11.11( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[8.11( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941148758s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.193389893s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941083908s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.193389893s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.940863609s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.193237305s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.947177887s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199981689s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.947136879s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199981689s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.d( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.946795464s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.199844360s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.d( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.946720123s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.199844360s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.18( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.940765381s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.193237305s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.17( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945632935s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199996948s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945360184s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199844360s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945519447s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199996948s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944979668s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.199783325s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944931984s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199783325s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945047379s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200057983s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944984436s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200057983s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945320129s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.199844360s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944793701s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200164795s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.14( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944931984s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200515747s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944884300s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200515747s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944594383s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200347900s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944548607s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200347900s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1f( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.945033073s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.201034546s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944325447s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200469971s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944270134s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200469971s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944096565s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200164795s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944956779s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.201202393s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944915771s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.201202393s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1d( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.b( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.944943428s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.201034546s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.e( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943584442s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200607300s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943605423s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200698853s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.e( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943493843s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200607300s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943561554s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200698853s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943550110s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200729370s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=75/76 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943508148s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200729370s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.14( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943443298s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200836182s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.14( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943375587s) [1] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200836182s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943263054s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200897217s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943226814s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200897217s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.15( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943150520s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.200912476s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.15( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.942918777s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.200912476s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.9( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943267822s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 70'64 active pruub 129.201049805s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.9( v 76'65 (0'0,76'65] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.942856789s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 70'64 mlcod 0'0 unknown NOTIFY pruub 129.201049805s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.19( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.13( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.11( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[8.1a( empty local-lis/les=0/0 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.943011284s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active pruub 129.200988770s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 77 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=75/76 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77 pruub=10.941424370s) [0] r=-1 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.200988770s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.10( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[11.10( empty local-lis/les=0/0 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.12( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.d( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.1a( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.f( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.7( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.8( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.2( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.4( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.1( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.15( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 77 pg[10.14( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 77 pg[10.17( empty local-lis/les=0/0 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:04 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct 01 13:12:04 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct 01 13:12:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct 01 13:12:05 compute-0 ceph-mon[74802]: pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 01 13:12:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:12:05 compute-0 ceph-mon[74802]: osdmap e77: 3 total, 3 up, 3 in
Oct 01 13:12:05 compute-0 ceph-mon[74802]: 5.3 scrub starts
Oct 01 13:12:05 compute-0 ceph-mon[74802]: 5.3 scrub ok
Oct 01 13:12:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct 01 13:12:05 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1a( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.11( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.3( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.19( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.1d( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.5( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.18( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.b( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.12( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1f( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.11( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1c( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1e( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=77/78 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.1b( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.9( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.8( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.d( v 70'2 lc 0'0 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.15( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=77/78 n=1 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.2( v 70'2 (0'0,70'2] local-lis/les=77/78 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[11.3( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [2] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 78 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [2] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.b( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[73,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.13( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.14( v 76'65 lc 70'54 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.11( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.f( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.1a( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.12( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.2( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.b( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.6( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 78 pg[10.10( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [1] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.8( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.4( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.14( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.4( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.17( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.9( v 76'65 lc 70'56 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.15( v 76'65 lc 70'46 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.d( v 76'65 lc 70'50 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.7( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.e( v 76'65 lc 70'48 (0'0,76'65] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=76'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.f( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.e( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.1e( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.1( v 70'2 (0'0,70'2] local-lis/les=77/78 n=1 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.17( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.16( v 70'64 (0'0,70'64] local-lis/les=77/78 n=0 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[10.1( v 70'64 (0'0,70'64] local-lis/les=77/78 n=1 ec=75/43 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.19( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.6( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=77/78 n=0 ec=73/39 lis/c=73/73 les/c/f=75/75/0 sis=77) [0] r=0 lpr=77 pi=[73,77)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 78 pg[11.10( v 70'2 (0'0,70'2] local-lis/les=77/78 n=0 ec=75/45 lis/c=75/75 les/c/f=76/76/0 sis=77) [0] r=0 lpr=77 pi=[75,77)/1 crt=70'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:05 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts
Oct 01 13:12:05 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok
Oct 01 13:12:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct 01 13:12:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 13:12:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct 01 13:12:06 compute-0 ceph-mon[74802]: osdmap e78: 3 total, 3 up, 3 in
Oct 01 13:12:06 compute-0 ceph-mon[74802]: 5.11 deep-scrub starts
Oct 01 13:12:06 compute-0 ceph-mon[74802]: 5.11 deep-scrub ok
Oct 01 13:12:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 01 13:12:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 13:12:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct 01 13:12:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 79 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=78) [0]/[1] async=[0] r=0 lpr=78 pi=[73,78)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 01 13:12:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 01 13:12:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct 01 13:12:07 compute-0 ceph-mon[74802]: pgmap v168: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 01 13:12:07 compute-0 ceph-mon[74802]: osdmap e79: 3 total, 3 up, 3 in
Oct 01 13:12:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct 01 13:12:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944107056s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227615356s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944192886s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227813721s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943981171s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227615356s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.944096565s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227813721s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943475723s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227645874s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943406105s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227645874s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.943039894s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227432251s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942906380s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227432251s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942847252s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227722168s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941985130s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227172852s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941921234s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227172852s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942126274s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227569580s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941932678s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227569580s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941720963s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227447510s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941661835s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227447510s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.942129135s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227722168s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941451073s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227752686s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941294670s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227722168s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941205978s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227722168s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940401077s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227157593s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.941008568s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227752686s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940311432s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227157593s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940115929s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227111816s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940026283s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227111816s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940593719s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227874756s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940309525s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227096558s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940491676s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227874756s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940132141s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.227661133s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.940086365s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227661133s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.939704895s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.227096558s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.933441162s) [0] async=[0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 141.221206665s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=78/79 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80 pruub=14.933380127s) [0] r=-1 lpr=80 pi=[73,80)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 141.221206665s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 80 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:07 compute-0 sshd-session[104408]: Accepted publickey for zuul from 192.168.122.30 port 34698 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:12:07 compute-0 systemd-logind[818]: New session 34 of user zuul.
Oct 01 13:12:07 compute-0 systemd[1]: Started Session 34 of User zuul.
Oct 01 13:12:07 compute-0 sshd-session[104408]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:12:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 01 13:12:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 01 13:12:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1001 B/s, 24 objects/s recovering
Oct 01 13:12:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct 01 13:12:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct 01 13:12:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct 01 13:12:08 compute-0 ceph-mon[74802]: 5.1b scrub starts
Oct 01 13:12:08 compute-0 ceph-mon[74802]: 5.1b scrub ok
Oct 01 13:12:08 compute-0 ceph-mon[74802]: osdmap e80: 3 total, 3 up, 3 in
Oct 01 13:12:08 compute-0 ceph-mon[74802]: 5.2 scrub starts
Oct 01 13:12:08 compute-0 ceph-mon[74802]: 5.2 scrub ok
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1b( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1d( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.3( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.1( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.d( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.9( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.b( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.5( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.11( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 81 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=78/73 les/c/f=79/75/0 sis=80) [0] r=0 lpr=80 pi=[73,80)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Oct 01 13:12:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Oct 01 13:12:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct 01 13:12:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct 01 13:12:08 compute-0 python3.9[104561]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:12:09 compute-0 ceph-mon[74802]: pgmap v171: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1001 B/s, 24 objects/s recovering
Oct 01 13:12:09 compute-0 ceph-mon[74802]: osdmap e81: 3 total, 3 up, 3 in
Oct 01 13:12:09 compute-0 ceph-mon[74802]: 2.15 deep-scrub starts
Oct 01 13:12:09 compute-0 ceph-mon[74802]: 2.15 deep-scrub ok
Oct 01 13:12:09 compute-0 ceph-mon[74802]: 2.1d scrub starts
Oct 01 13:12:09 compute-0 ceph-mon[74802]: 2.1d scrub ok
Oct 01 13:12:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 01 13:12:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 01 13:12:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 01 13:12:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 01 13:12:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 848 B/s, 20 objects/s recovering
Oct 01 13:12:10 compute-0 ceph-mon[74802]: 5.12 scrub starts
Oct 01 13:12:10 compute-0 ceph-mon[74802]: 5.12 scrub ok
Oct 01 13:12:10 compute-0 ceph-mon[74802]: 2.1c scrub starts
Oct 01 13:12:10 compute-0 ceph-mon[74802]: 2.1c scrub ok
Oct 01 13:12:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:10 compute-0 sudo[104777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqnmrdqxqvtismunaiegyiqjsvrsxjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324330.0721493-32-144758599667017/AnsiballZ_command.py'
Oct 01 13:12:10 compute-0 sudo[104777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:10 compute-0 python3.9[104779]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:12:11 compute-0 ceph-mon[74802]: pgmap v173: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 848 B/s, 20 objects/s recovering
Oct 01 13:12:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 01 13:12:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 01 13:12:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 667 B/s, 16 objects/s recovering
Oct 01 13:12:12 compute-0 ceph-mon[74802]: 5.16 scrub starts
Oct 01 13:12:12 compute-0 ceph-mon[74802]: 5.16 scrub ok
Oct 01 13:12:12 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 01 13:12:12 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 01 13:12:13 compute-0 ceph-mon[74802]: pgmap v174: 305 pgs: 16 peering, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 667 B/s, 16 objects/s recovering
Oct 01 13:12:13 compute-0 ceph-mon[74802]: 2.b scrub starts
Oct 01 13:12:13 compute-0 ceph-mon[74802]: 2.b scrub ok
Oct 01 13:12:13 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 01 13:12:13 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 01 13:12:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Oct 01 13:12:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct 01 13:12:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 13:12:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct 01 13:12:14 compute-0 ceph-mon[74802]: 2.8 scrub starts
Oct 01 13:12:14 compute-0 ceph-mon[74802]: 2.8 scrub ok
Oct 01 13:12:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 01 13:12:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 13:12:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct 01 13:12:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct 01 13:12:14 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 01 13:12:14 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 01 13:12:14 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 01 13:12:14 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 01 13:12:15 compute-0 ceph-mon[74802]: pgmap v175: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 519 B/s, 12 objects/s recovering
Oct 01 13:12:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 01 13:12:15 compute-0 ceph-mon[74802]: osdmap e82: 3 total, 3 up, 3 in
Oct 01 13:12:15 compute-0 ceph-mon[74802]: 5.9 scrub starts
Oct 01 13:12:15 compute-0 ceph-mon[74802]: 5.9 scrub ok
Oct 01 13:12:15 compute-0 ceph-mon[74802]: 2.1f scrub starts
Oct 01 13:12:15 compute-0 ceph-mon[74802]: 2.1f scrub ok
Oct 01 13:12:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Oct 01 13:12:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Oct 01 13:12:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct 01 13:12:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 13:12:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct 01 13:12:16 compute-0 ceph-mon[74802]: 5.13 deep-scrub starts
Oct 01 13:12:16 compute-0 ceph-mon[74802]: 5.13 deep-scrub ok
Oct 01 13:12:16 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 01 13:12:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 13:12:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct 01 13:12:16 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct 01 13:12:16 compute-0 sudo[104803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:16 compute-0 sudo[104803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:16 compute-0 sudo[104803]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:16 compute-0 sudo[104828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:12:16 compute-0 sudo[104828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:16 compute-0 sudo[104828]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:16 compute-0 sudo[104854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:16 compute-0 sudo[104854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:16 compute-0 sudo[104854]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:16 compute-0 sudo[104879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:12:16 compute-0 sudo[104879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 01 13:12:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 01 13:12:17 compute-0 sudo[104879]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f4946727-9f61-4e83-8ac5-fe8cc7c5ff30 does not exist
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ac4532e0-3dbb-4be0-852f-02cd916e9dbd does not exist
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e7a6cdfd-3696-466a-9d6d-2d77f9433e2f does not exist
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 01 13:12:17 compute-0 ceph-mon[74802]: osdmap e83: 3 total, 3 up, 3 in
Oct 01 13:12:17 compute-0 ceph-mon[74802]: 5.4 scrub starts
Oct 01 13:12:17 compute-0 ceph-mon[74802]: 5.4 scrub ok
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:12:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:12:17 compute-0 sudo[104943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:17 compute-0 sudo[104943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:17 compute-0 sudo[104943]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:17 compute-0 sudo[104968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:12:17 compute-0 sudo[104968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:17 compute-0 sudo[104968]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:17 compute-0 sudo[104993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:17 compute-0 sudo[104993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:17 compute-0 sudo[104993]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 01 13:12:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 01 13:12:17 compute-0 sudo[105018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:12:17 compute-0 sudo[105018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:17 compute-0 sudo[104777]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.734802778 +0000 UTC m=+0.054842126 container create 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct 01 13:12:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 13:12:17 compute-0 systemd[1]: Started libpod-conmon-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope.
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.704994304 +0000 UTC m=+0.025033692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.820260551 +0000 UTC m=+0.140299979 container init 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.82585762 +0000 UTC m=+0.145896968 container start 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.828789034 +0000 UTC m=+0.148828472 container attach 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:12:17 compute-0 competent_chatterjee[105123]: 167 167
Oct 01 13:12:17 compute-0 systemd[1]: libpod-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope: Deactivated successfully.
Oct 01 13:12:17 compute-0 conmon[105123]: conmon 91864f5a045eaba4211f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope/container/memory.events
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.83301066 +0000 UTC m=+0.153050008 container died 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:12:17 compute-0 sshd-session[104411]: Connection closed by 192.168.122.30 port 34698
Oct 01 13:12:17 compute-0 sshd-session[104408]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d652e686bf8b6f8faac64841a5821f552b1c37b2a66b53293ebcd4d9f0405db2-merged.mount: Deactivated successfully.
Oct 01 13:12:17 compute-0 systemd-logind[818]: Session 34 logged out. Waiting for processes to exit.
Oct 01 13:12:17 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Oct 01 13:12:17 compute-0 systemd[1]: session-34.scope: Consumed 8.571s CPU time.
Oct 01 13:12:17 compute-0 systemd-logind[818]: Removed session 34.
Oct 01 13:12:17 compute-0 podman[105107]: 2025-10-01 13:12:17.869556309 +0000 UTC m=+0.189595667 container remove 91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:12:17 compute-0 systemd[1]: libpod-conmon-91864f5a045eaba4211fbc754df4627142b626584e9c9acd17ded200096baf0d.scope: Deactivated successfully.
Oct 01 13:12:17 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct 01 13:12:17 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct 01 13:12:18 compute-0 podman[105145]: 2025-10-01 13:12:18.103456402 +0000 UTC m=+0.086942683 container create 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:18 compute-0 podman[105145]: 2025-10-01 13:12:18.043696109 +0000 UTC m=+0.027182420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:18 compute-0 systemd[1]: Started libpod-conmon-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope.
Oct 01 13:12:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct 01 13:12:18 compute-0 ceph-mon[74802]: 5.1 scrub starts
Oct 01 13:12:18 compute-0 ceph-mon[74802]: 5.1 scrub ok
Oct 01 13:12:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 01 13:12:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 13:12:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct 01 13:12:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct 01 13:12:18 compute-0 podman[105145]: 2025-10-01 13:12:18.230427664 +0000 UTC m=+0.213913985 container init 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:12:18 compute-0 podman[105145]: 2025-10-01 13:12:18.245249697 +0000 UTC m=+0.228735928 container start 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:12:18 compute-0 podman[105145]: 2025-10-01 13:12:18.248370567 +0000 UTC m=+0.231857238 container attach 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:12:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 01 13:12:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 01 13:12:19 compute-0 ceph-mon[74802]: pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:19 compute-0 ceph-mon[74802]: 5.1c deep-scrub starts
Oct 01 13:12:19 compute-0 ceph-mon[74802]: 5.1c deep-scrub ok
Oct 01 13:12:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 01 13:12:19 compute-0 ceph-mon[74802]: osdmap e84: 3 total, 3 up, 3 in
Oct 01 13:12:19 compute-0 optimistic_bouman[105163]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:12:19 compute-0 optimistic_bouman[105163]: --> relative data size: 1.0
Oct 01 13:12:19 compute-0 optimistic_bouman[105163]: --> All data devices are unavailable
Oct 01 13:12:19 compute-0 systemd[1]: libpod-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Deactivated successfully.
Oct 01 13:12:19 compute-0 podman[105145]: 2025-10-01 13:12:19.354311588 +0000 UTC m=+1.337797859 container died 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:19 compute-0 systemd[1]: libpod-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Consumed 1.057s CPU time.
Oct 01 13:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9f1961d0d9be0580f519a6837bded43e5b309d04ea19f05857807d6f701e253-merged.mount: Deactivated successfully.
Oct 01 13:12:19 compute-0 podman[105145]: 2025-10-01 13:12:19.422848331 +0000 UTC m=+1.406334582 container remove 7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:12:19 compute-0 systemd[1]: libpod-conmon-7be42ee4aba5b4da7efbc133cc5e18fadc639f67231c8676500138a7d5369dd8.scope: Deactivated successfully.
Oct 01 13:12:19 compute-0 sudo[105018]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:19 compute-0 sudo[105204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:19 compute-0 sudo[105204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:19 compute-0 sudo[105204]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:19 compute-0 sudo[105229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:12:19 compute-0 sudo[105229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:19 compute-0 sudo[105229]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:19 compute-0 sudo[105255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:19 compute-0 sudo[105255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:19 compute-0 sudo[105255]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct 01 13:12:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 13:12:19 compute-0 sudo[105280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:12:19 compute-0 sudo[105280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.199878639 +0000 UTC m=+0.050432924 container create c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:12:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct 01 13:12:20 compute-0 ceph-mon[74802]: 5.1f scrub starts
Oct 01 13:12:20 compute-0 ceph-mon[74802]: 5.1f scrub ok
Oct 01 13:12:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 01 13:12:20 compute-0 systemd[1]: Started libpod-conmon-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope.
Oct 01 13:12:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 13:12:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct 01 13:12:20 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.175435137 +0000 UTC m=+0.025989452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.288147153 +0000 UTC m=+0.138701438 container init c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.295943763 +0000 UTC m=+0.146498078 container start c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:12:20 compute-0 stoic_edison[105361]: 167 167
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.300410155 +0000 UTC m=+0.150964480 container attach c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:12:20 compute-0 systemd[1]: libpod-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope: Deactivated successfully.
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.303913277 +0000 UTC m=+0.154467592 container died c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7018ec60c02f03ba14f054ad60af82a2a0bf23c904c25d6e81914b8fa5f2402-merged.mount: Deactivated successfully.
Oct 01 13:12:20 compute-0 podman[105345]: 2025-10-01 13:12:20.349536227 +0000 UTC m=+0.200090512 container remove c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_edison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:12:20 compute-0 systemd[1]: libpod-conmon-c4acdcb7a3279f3d3f59a464da879edb9c54450241e0b95720e3fe2b97145837.scope: Deactivated successfully.
Oct 01 13:12:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:20 compute-0 podman[105386]: 2025-10-01 13:12:20.586505108 +0000 UTC m=+0.062396187 container create fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:12:20 compute-0 systemd[1]: Started libpod-conmon-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope.
Oct 01 13:12:20 compute-0 podman[105386]: 2025-10-01 13:12:20.563998468 +0000 UTC m=+0.039889547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:20 compute-0 podman[105386]: 2025-10-01 13:12:20.705235997 +0000 UTC m=+0.181127126 container init fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:12:20 compute-0 podman[105386]: 2025-10-01 13:12:20.716313291 +0000 UTC m=+0.192204360 container start fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 01 13:12:20 compute-0 podman[105386]: 2025-10-01 13:12:20.720903888 +0000 UTC m=+0.196794967 container attach fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:12:21 compute-0 ceph-mon[74802]: pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 01 13:12:21 compute-0 ceph-mon[74802]: osdmap e85: 3 total, 3 up, 3 in
Oct 01 13:12:21 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 01 13:12:21 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]: {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     "0": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "devices": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "/dev/loop3"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             ],
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_name": "ceph_lv0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_size": "21470642176",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "name": "ceph_lv0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "tags": {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_name": "ceph",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.crush_device_class": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.encrypted": "0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_id": "0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.vdo": "0"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             },
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "vg_name": "ceph_vg0"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         }
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     ],
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     "1": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "devices": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "/dev/loop4"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             ],
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_name": "ceph_lv1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_size": "21470642176",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "name": "ceph_lv1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "tags": {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_name": "ceph",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.crush_device_class": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.encrypted": "0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_id": "1",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.vdo": "0"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             },
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "vg_name": "ceph_vg1"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         }
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     ],
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     "2": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "devices": [
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "/dev/loop5"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             ],
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_name": "ceph_lv2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_size": "21470642176",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "name": "ceph_lv2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "tags": {
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.cluster_name": "ceph",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.crush_device_class": "",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.encrypted": "0",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osd_id": "2",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:                 "ceph.vdo": "0"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             },
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "type": "block",
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:             "vg_name": "ceph_vg2"
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:         }
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]:     ]
Oct 01 13:12:21 compute-0 clever_mcnulty[105402]: }
Oct 01 13:12:21 compute-0 systemd[1]: libpod-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope: Deactivated successfully.
Oct 01 13:12:21 compute-0 podman[105411]: 2025-10-01 13:12:21.566957774 +0000 UTC m=+0.034561667 container died fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-67669ce5016f187b0132f2eaaebbf06888ca3023d8857d46494e68856d381ce1-merged.mount: Deactivated successfully.
Oct 01 13:12:21 compute-0 podman[105411]: 2025-10-01 13:12:21.63872151 +0000 UTC m=+0.106325343 container remove fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mcnulty, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:21 compute-0 systemd[1]: libpod-conmon-fa87858deca7c6b4e64735a2317c7b192779a9851c2376adc93a881a8961f5e3.scope: Deactivated successfully.
Oct 01 13:12:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Oct 01 13:12:21 compute-0 sudo[105280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Oct 01 13:12:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct 01 13:12:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 13:12:21 compute-0 sudo[105426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:21 compute-0 sudo[105426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:21 compute-0 sudo[105426]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:21 compute-0 sudo[105451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:12:21 compute-0 sudo[105451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:21 compute-0 sudo[105451]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:21 compute-0 sudo[105476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:21 compute-0 sudo[105476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:21 compute-0 sudo[105476]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:22 compute-0 sudo[105501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:12:22 compute-0 sudo[105501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct 01 13:12:22 compute-0 ceph-mon[74802]: 2.d scrub starts
Oct 01 13:12:22 compute-0 ceph-mon[74802]: 2.d scrub ok
Oct 01 13:12:22 compute-0 ceph-mon[74802]: 7.18 deep-scrub starts
Oct 01 13:12:22 compute-0 ceph-mon[74802]: 7.18 deep-scrub ok
Oct 01 13:12:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 01 13:12:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 13:12:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct 01 13:12:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.430719937 +0000 UTC m=+0.053614715 container create 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:12:22 compute-0 systemd[1]: Started libpod-conmon-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope.
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.405069247 +0000 UTC m=+0.027964085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.521083578 +0000 UTC m=+0.143978426 container init 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.531420098 +0000 UTC m=+0.154314886 container start 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:12:22 compute-0 bold_jones[105581]: 167 167
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.535350395 +0000 UTC m=+0.158245183 container attach 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:12:22 compute-0 systemd[1]: libpod-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope: Deactivated successfully.
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.537370089 +0000 UTC m=+0.160264877 container died 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-423b2b325339e4ce00d90d6da4dc50db81d0448b0237167d79c67f52a85bd623-merged.mount: Deactivated successfully.
Oct 01 13:12:22 compute-0 podman[105565]: 2025-10-01 13:12:22.58615354 +0000 UTC m=+0.209048338 container remove 2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:22 compute-0 systemd[1]: libpod-conmon-2bcceb7196ad5242e564bd9671e42f2cffd6e093afffaacac7f3ea8709009e4d.scope: Deactivated successfully.
Oct 01 13:12:22 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct 01 13:12:22 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct 01 13:12:22 compute-0 podman[105608]: 2025-10-01 13:12:22.799881818 +0000 UTC m=+0.050983952 container create bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:12:22 compute-0 systemd[1]: Started libpod-conmon-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope.
Oct 01 13:12:22 compute-0 podman[105608]: 2025-10-01 13:12:22.778465452 +0000 UTC m=+0.029567596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:12:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:12:22 compute-0 podman[105608]: 2025-10-01 13:12:22.916774207 +0000 UTC m=+0.167876331 container init bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:12:22 compute-0 podman[105608]: 2025-10-01 13:12:22.924840905 +0000 UTC m=+0.175943039 container start bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:12:22 compute-0 podman[105608]: 2025-10-01 13:12:22.928841073 +0000 UTC m=+0.179943197 container attach bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Oct 01 13:12:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Oct 01 13:12:23 compute-0 sshd-session[105601]: Invalid user ander from 200.7.101.139 port 57280
Oct 01 13:12:23 compute-0 sshd-session[105601]: Received disconnect from 200.7.101.139 port 57280:11: Bye Bye [preauth]
Oct 01 13:12:23 compute-0 sshd-session[105601]: Disconnected from invalid user ander 200.7.101.139 port 57280 [preauth]
Oct 01 13:12:23 compute-0 ceph-mon[74802]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 01 13:12:23 compute-0 ceph-mon[74802]: osdmap e86: 3 total, 3 up, 3 in
Oct 01 13:12:23 compute-0 ceph-mon[74802]: 7.9 scrub starts
Oct 01 13:12:23 compute-0 ceph-mon[74802]: 7.9 scrub ok
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.672709465s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260330200s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.672541618s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260330200s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670915604s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260543823s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670770645s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260543823s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670593262s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260620117s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670250893s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260620117s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670630455s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 active pruub 156.260833740s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 86 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86 pruub=8.670126915s) [2] r=-1 lpr=86 pi=[80,86)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 156.260833740s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.538872719s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.134384155s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=86) [2] r=0 lpr=86 pi=[80,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.538788795s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.134384155s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539843559s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.135955811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539805412s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.135955811s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.540073395s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136550903s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539706230s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136550903s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 85 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539821625s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.137420654s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:23 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 86 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85 pruub=14.539777756s) [2] r=-1 lpr=85 pi=[73,85)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.137420654s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 86 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=85) [2] r=0 lpr=86 pi=[73,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct 01 13:12:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 13:12:23 compute-0 confident_poitras[105625]: {
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_id": 0,
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "type": "bluestore"
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     },
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_id": 2,
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "type": "bluestore"
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     },
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_id": 1,
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:12:23 compute-0 confident_poitras[105625]:         "type": "bluestore"
Oct 01 13:12:23 compute-0 confident_poitras[105625]:     }
Oct 01 13:12:23 compute-0 confident_poitras[105625]: }
Oct 01 13:12:24 compute-0 systemd[1]: libpod-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Deactivated successfully.
Oct 01 13:12:24 compute-0 podman[105608]: 2025-10-01 13:12:24.002580634 +0000 UTC m=+1.253682768 container died bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:12:24 compute-0 systemd[1]: libpod-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Consumed 1.082s CPU time.
Oct 01 13:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b90e3228cb2c96059544f6a8e2b9be1f3887d5a3d773d5853efe89bf2beb58-merged.mount: Deactivated successfully.
Oct 01 13:12:24 compute-0 podman[105608]: 2025-10-01 13:12:24.107674095 +0000 UTC m=+1.358776229 container remove bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poitras, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:12:24 compute-0 systemd[1]: libpod-conmon-bdfa1648bfd8a70005bbb4691a61b30f127a81e736441db49538e136b8c4128d.scope: Deactivated successfully.
Oct 01 13:12:24 compute-0 sudo[105501]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:12:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:12:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev dc0e9743-c86e-4bf8-a157-b1a8b9179b29 does not exist
Oct 01 13:12:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e92f1567-f0aa-4d2d-8561-efe1b5c800f1 does not exist
Oct 01 13:12:24 compute-0 sudo[105672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:12:24 compute-0 sudo[105672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:24 compute-0 sudo[105672]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct 01 13:12:24 compute-0 ceph-mon[74802]: 7.1a deep-scrub starts
Oct 01 13:12:24 compute-0 ceph-mon[74802]: 7.1a deep-scrub ok
Oct 01 13:12:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 01 13:12:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:12:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 13:12:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct 01 13:12:24 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 87 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=80/81 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.6( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[80,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667768478s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136596680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667716026s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136596680s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667521477s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 157.136795044s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:24 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 87 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87 pruub=13.667475700s) [2] r=-1 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.136795044s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2] r=0 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 87 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2] r=0 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:24 compute-0 sudo[105697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:12:24 compute-0 sudo[105697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:12:24 compute-0 sudo[105697]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct 01 13:12:25 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:25 compute-0 ceph-mon[74802]: pgmap v185: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 01 13:12:25 compute-0 ceph-mon[74802]: osdmap e87: 3 total, 3 up, 3 in
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 88 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[73,87)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 88 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[80,87)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct 01 13:12:25 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.858527184s) [2] async=[2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.473541260s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.858265877s) [2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.473541260s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.866624832s) [2] async=[2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.482833862s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89 pruub=15.866536140s) [2] r=-1 lpr=89 pi=[73,89)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.482833862s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89 pruub=15.858545303s) [2] async=[2] r=-1 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 70'389 active pruub 165.476104736s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89 pruub=15.858445168s) [2] r=-1 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476104736s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 89 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 89 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[73,88)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct 01 13:12:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 13:12:26 compute-0 ceph-mon[74802]: osdmap e88: 3 total, 3 up, 3 in
Oct 01 13:12:26 compute-0 ceph-mon[74802]: osdmap e89: 3 total, 3 up, 3 in
Oct 01 13:12:26 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 01 13:12:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct 01 13:12:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct 01 13:12:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct 01 13:12:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 13:12:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct 01 13:12:26 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.844839096s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476242065s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.844753265s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476242065s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.842909813s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476150513s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842742920s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.483016968s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.842617989s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476150513s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991535187s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.632019043s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=88/89 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991478920s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.632019043s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842116356s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.482788086s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842031479s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.482788086s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.991071701s) [2] async=[2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 160.632049561s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=88/89 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90 pruub=14.990999222s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.632049561s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 90 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=87/88 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90 pruub=14.842623711s) [2] r=-1 lpr=90 pi=[73,90)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.483016968s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.841970444s) [2] async=[2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 70'389 active pruub 165.476837158s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 90 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=87/88 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90 pruub=14.841842651s) [2] r=-1 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 165.476837158s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=89) [2] r=0 lpr=89 pi=[80,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 90 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=89) [2] r=0 lpr=89 pi=[73,89)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct 01 13:12:27 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct 01 13:12:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct 01 13:12:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct 01 13:12:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct 01 13:12:27 compute-0 ceph-mon[74802]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:27 compute-0 ceph-mon[74802]: 4.14 scrub starts
Oct 01 13:12:27 compute-0 ceph-mon[74802]: 4.14 scrub ok
Oct 01 13:12:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 01 13:12:27 compute-0 ceph-mon[74802]: osdmap e90: 3 total, 3 up, 3 in
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.f( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.7( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.8( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.e( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.18( v 70'389 (0'0,70'389] local-lis/les=90/91 n=5 ec=73/41 lis/c=88/73 les/c/f=89/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.17( v 70'389 (0'0,70'389] local-lis/les=90/91 n=5 ec=73/41 lis/c=87/80 les/c/f=88/81/0 sis=90) [2] r=0 lpr=90 pi=[80,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 91 pg[9.6( v 70'389 (0'0,70'389] local-lis/les=90/91 n=6 ec=73/41 lis/c=87/73 les/c/f=88/75/0 sis=90) [2] r=0 lpr=90 pi=[73,90)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 285 B/s, 14 objects/s recovering
Oct 01 13:12:28 compute-0 sshd-session[105722]: Connection closed by authenticating user root 185.156.73.233 port 16772 [preauth]
Oct 01 13:12:28 compute-0 ceph-mon[74802]: 4.10 scrub starts
Oct 01 13:12:28 compute-0 ceph-mon[74802]: 4.10 scrub ok
Oct 01 13:12:28 compute-0 ceph-mon[74802]: osdmap e91: 3 total, 3 up, 3 in
Oct 01 13:12:28 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 01 13:12:28 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 01 13:12:29 compute-0 ceph-mon[74802]: pgmap v192: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 285 B/s, 14 objects/s recovering
Oct 01 13:12:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 221 B/s, 11 objects/s recovering
Oct 01 13:12:30 compute-0 sshd-session[105253]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:12:30 compute-0 sshd-session[105253]: banner exchange: Connection from 202.103.55.158 port 40478: Connection timed out
Oct 01 13:12:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:30 compute-0 ceph-mon[74802]: 3.1e scrub starts
Oct 01 13:12:30 compute-0 ceph-mon[74802]: 3.1e scrub ok
Oct 01 13:12:31 compute-0 ceph-mon[74802]: pgmap v193: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 221 B/s, 11 objects/s recovering
Oct 01 13:12:31 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct 01 13:12:31 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct 01 13:12:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 8 objects/s recovering
Oct 01 13:12:32 compute-0 ceph-mon[74802]: 7.6 deep-scrub starts
Oct 01 13:12:32 compute-0 ceph-mon[74802]: 7.6 deep-scrub ok
Oct 01 13:12:32 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct 01 13:12:32 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct 01 13:12:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Oct 01 13:12:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Oct 01 13:12:32 compute-0 sshd-session[105724]: Accepted publickey for zuul from 192.168.122.30 port 56882 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:12:32 compute-0 systemd-logind[818]: New session 35 of user zuul.
Oct 01 13:12:33 compute-0 systemd[1]: Started Session 35 of User zuul.
Oct 01 13:12:33 compute-0 sshd-session[105724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:12:33 compute-0 ceph-mon[74802]: pgmap v194: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 156 B/s, 8 objects/s recovering
Oct 01 13:12:33 compute-0 ceph-mon[74802]: 7.3 scrub starts
Oct 01 13:12:33 compute-0 ceph-mon[74802]: 7.3 scrub ok
Oct 01 13:12:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 6 objects/s recovering
Oct 01 13:12:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct 01 13:12:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 13:12:33 compute-0 python3.9[105877]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 01 13:12:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct 01 13:12:34 compute-0 ceph-mon[74802]: 3.1d deep-scrub starts
Oct 01 13:12:34 compute-0 ceph-mon[74802]: 3.1d deep-scrub ok
Oct 01 13:12:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 01 13:12:34 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 13:12:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct 01 13:12:34 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct 01 13:12:34 compute-0 sshd-session[105954]: Invalid user user from 80.253.31.232 port 56360
Oct 01 13:12:34 compute-0 python3.9[106053]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:12:34 compute-0 sshd-session[105954]: Received disconnect from 80.253.31.232 port 56360:11: Bye Bye [preauth]
Oct 01 13:12:34 compute-0 sshd-session[105954]: Disconnected from invalid user user 80.253.31.232 port 56360 [preauth]
Oct 01 13:12:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:35 compute-0 ceph-mon[74802]: pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 6 objects/s recovering
Oct 01 13:12:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 01 13:12:35 compute-0 ceph-mon[74802]: osdmap e92: 3 total, 3 up, 3 in
Oct 01 13:12:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct 01 13:12:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct 01 13:12:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct 01 13:12:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 13:12:35 compute-0 sudo[106207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqhwdkwzfvfgdyndmyxcpujgskojzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324355.3296883-45-262832080657594/AnsiballZ_command.py'
Oct 01 13:12:35 compute-0 sudo[106207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:36 compute-0 python3.9[106209]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:12:36 compute-0 sudo[106207]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct 01 13:12:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 13:12:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct 01 13:12:36 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct 01 13:12:36 compute-0 ceph-mon[74802]: 7.f scrub starts
Oct 01 13:12:36 compute-0 ceph-mon[74802]: 7.f scrub ok
Oct 01 13:12:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 01 13:12:36 compute-0 sudo[106360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrahfhomkcvysknbuywhlmxwrvtqzbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324356.3296723-57-272196461013988/AnsiballZ_stat.py'
Oct 01 13:12:36 compute-0 sudo[106360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:37 compute-0 python3.9[106362]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:12:37 compute-0 sudo[106360]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:37 compute-0 PackageKit[31618]: daemon quit
Oct 01 13:12:37 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 13:12:37 compute-0 ceph-mon[74802]: pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 01 13:12:37 compute-0 ceph-mon[74802]: osdmap e93: 3 total, 3 up, 3 in
Oct 01 13:12:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct 01 13:12:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 13:12:37 compute-0 sudo[106514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qisvnjwwyhgtxiqpwxwztgxogemspcsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324357.3267608-68-255863074573368/AnsiballZ_file.py'
Oct 01 13:12:37 compute-0 sudo[106514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:37 compute-0 python3.9[106516]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:12:38 compute-0 sudo[106514]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct 01 13:12:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 01 13:12:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 13:12:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct 01 13:12:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct 01 13:12:38 compute-0 python3.9[106666]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:12:38 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct 01 13:12:38 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct 01 13:12:38 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989083290s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 173.135879517s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:38 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989028931s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.135879517s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:38 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989823341s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 173.137756348s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:38 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 94 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94 pruub=14.989728928s) [2] r=-1 lpr=94 pi=[73,94)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.137756348s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:38 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 94 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94) [2] r=0 lpr=94 pi=[73,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:38 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 94 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=94) [2] r=0 lpr=94 pi=[73,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:39 compute-0 network[106683]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:12:39 compute-0 network[106684]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:12:39 compute-0 network[106685]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:12:39 compute-0 ceph-mon[74802]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 01 13:12:39 compute-0 ceph-mon[74802]: osdmap e94: 3 total, 3 up, 3 in
Oct 01 13:12:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct 01 13:12:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct 01 13:12:39 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct 01 13:12:39 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 01 13:12:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct 01 13:12:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 13:12:39 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:39 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:39 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:39 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 95 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=-1 lpr=95 pi=[73,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:39 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 01 13:12:39 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:39 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:39 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:39 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 95 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=73/75 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:40 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct 01 13:12:40 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct 01 13:12:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct 01 13:12:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 13:12:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct 01 13:12:40 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct 01 13:12:40 compute-0 ceph-mon[74802]: 7.e scrub starts
Oct 01 13:12:40 compute-0 ceph-mon[74802]: 7.e scrub ok
Oct 01 13:12:40 compute-0 ceph-mon[74802]: osdmap e95: 3 total, 3 up, 3 in
Oct 01 13:12:40 compute-0 ceph-mon[74802]: 3.9 scrub starts
Oct 01 13:12:40 compute-0 ceph-mon[74802]: pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 01 13:12:40 compute-0 ceph-mon[74802]: 3.9 scrub ok
Oct 01 13:12:40 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 96 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] async=[2] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:40 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 96 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=73/73 les/c/f=75/75/0 sis=95) [2]/[1] async=[2] r=0 lpr=95 pi=[73,95)/1 crt=70'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:41 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct 01 13:12:41 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct 01 13:12:41 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 01 13:12:41 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 01 13:12:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct 01 13:12:41 compute-0 ceph-mon[74802]: 3.17 scrub starts
Oct 01 13:12:41 compute-0 ceph-mon[74802]: 3.17 scrub ok
Oct 01 13:12:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 01 13:12:41 compute-0 ceph-mon[74802]: osdmap e96: 3 total, 3 up, 3 in
Oct 01 13:12:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct 01 13:12:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 13:12:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct 01 13:12:41 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct 01 13:12:41 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:41 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:41 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:41 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:41 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.968115807s) [2] async=[2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 175.916351318s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:41 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.968032837s) [2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.916351318s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:41 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.970879555s) [2] async=[2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 active pruub 175.919281006s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:41 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 97 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=95/96 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97 pruub=14.970707893s) [2] r=-1 lpr=97 pi=[73,97)/1 crt=70'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.919281006s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct 01 13:12:42 compute-0 ceph-mon[74802]: 5.c scrub starts
Oct 01 13:12:42 compute-0 ceph-mon[74802]: 5.c scrub ok
Oct 01 13:12:42 compute-0 ceph-mon[74802]: 3.15 scrub starts
Oct 01 13:12:42 compute-0 ceph-mon[74802]: 3.15 scrub ok
Oct 01 13:12:42 compute-0 ceph-mon[74802]: pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 01 13:12:42 compute-0 ceph-mon[74802]: osdmap e97: 3 total, 3 up, 3 in
Oct 01 13:12:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 13:12:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct 01 13:12:42 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct 01 13:12:42 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 98 pg[9.c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=6 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:42 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 98 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=95/73 les/c/f=96/75/0 sis=97) [2] r=0 lpr=97 pi=[73,97)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:43 compute-0 python3.9[106948]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:12:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 01 13:12:43 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 01 13:12:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 01 13:12:43 compute-0 ceph-mon[74802]: osdmap e98: 3 total, 3 up, 3 in
Oct 01 13:12:43 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 01 13:12:44 compute-0 python3.9[107098]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:12:44 compute-0 ceph-mon[74802]: pgmap v207: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Oct 01 13:12:44 compute-0 ceph-mon[74802]: 7.c scrub starts
Oct 01 13:12:44 compute-0 ceph-mon[74802]: 7.c scrub ok
Oct 01 13:12:44 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 01 13:12:44 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 01 13:12:45 compute-0 python3.9[107252]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:12:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:45 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Oct 01 13:12:45 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Oct 01 13:12:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 13:12:45 compute-0 ceph-mon[74802]: 3.8 scrub starts
Oct 01 13:12:45 compute-0 ceph-mon[74802]: 3.8 scrub ok
Oct 01 13:12:45 compute-0 ceph-mon[74802]: 7.13 deep-scrub starts
Oct 01 13:12:45 compute-0 ceph-mon[74802]: 7.13 deep-scrub ok
Oct 01 13:12:46 compute-0 sudo[107408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daonckxwogierxpfcwefrsypzhqzlaez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324365.9158604-116-153107414258685/AnsiballZ_setup.py'
Oct 01 13:12:46 compute-0 sudo[107408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 01 13:12:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 01 13:12:46 compute-0 python3.9[107410]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:12:46 compute-0 sudo[107408]: pam_unix(sudo:session): session closed for user root
Oct 01 13:12:46 compute-0 ceph-mon[74802]: pgmap v208: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 13:12:46 compute-0 ceph-mon[74802]: 5.f scrub starts
Oct 01 13:12:46 compute-0 ceph-mon[74802]: 5.f scrub ok
Oct 01 13:12:47 compute-0 sudo[107492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wulidyldbtlmmxmsbqqvbojoaqkhnxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324365.9158604-116-153107414258685/AnsiballZ_dnf.py'
Oct 01 13:12:47 compute-0 sudo[107492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:12:47 compute-0 python3.9[107494]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:12:47 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 01 13:12:47 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:12:47
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 01 13:12:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct 01 13:12:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:12:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:12:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct 01 13:12:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 13:12:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct 01 13:12:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct 01 13:12:47 compute-0 ceph-mon[74802]: 3.1f scrub starts
Oct 01 13:12:47 compute-0 ceph-mon[74802]: 3.1f scrub ok
Oct 01 13:12:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 01 13:12:48 compute-0 ceph-mon[74802]: pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 01 13:12:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 01 13:12:48 compute-0 ceph-mon[74802]: osdmap e99: 3 total, 3 up, 3 in
Oct 01 13:12:48 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct 01 13:12:48 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct 01 13:12:49 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 01 13:12:49 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 01 13:12:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct 01 13:12:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct 01 13:12:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 01 13:12:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct 01 13:12:49 compute-0 ceph-mon[74802]: 3.7 scrub starts
Oct 01 13:12:49 compute-0 ceph-mon[74802]: 3.7 scrub ok
Oct 01 13:12:49 compute-0 ceph-mon[74802]: 2.a scrub starts
Oct 01 13:12:49 compute-0 ceph-mon[74802]: 2.a scrub ok
Oct 01 13:12:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 01 13:12:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 01 13:12:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct 01 13:12:49 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct 01 13:12:50 compute-0 sshd-session[107529]: Invalid user jhall from 27.254.137.144 port 54296
Oct 01 13:12:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 01 13:12:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 01 13:12:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:50 compute-0 sshd-session[107529]: Received disconnect from 27.254.137.144 port 54296:11: Bye Bye [preauth]
Oct 01 13:12:50 compute-0 sshd-session[107529]: Disconnected from invalid user jhall 27.254.137.144 port 54296 [preauth]
Oct 01 13:12:50 compute-0 ceph-mon[74802]: pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct 01 13:12:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 01 13:12:50 compute-0 ceph-mon[74802]: osdmap e100: 3 total, 3 up, 3 in
Oct 01 13:12:50 compute-0 ceph-mon[74802]: 2.9 scrub starts
Oct 01 13:12:50 compute-0 ceph-mon[74802]: 2.9 scrub ok
Oct 01 13:12:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct 01 13:12:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 01 13:12:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct 01 13:12:51 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 01 13:12:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct 01 13:12:52 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct 01 13:12:52 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 01 13:12:52 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 01 13:12:52 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 01 13:12:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 01 13:12:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 01 13:12:53 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 01 13:12:53 compute-0 ceph-mon[74802]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:53 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 01 13:12:53 compute-0 ceph-mon[74802]: osdmap e101: 3 total, 3 up, 3 in
Oct 01 13:12:53 compute-0 ceph-mon[74802]: 5.1a scrub starts
Oct 01 13:12:53 compute-0 ceph-mon[74802]: 5.1a scrub ok
Oct 01 13:12:53 compute-0 ceph-mon[74802]: 3.a scrub starts
Oct 01 13:12:53 compute-0 ceph-mon[74802]: 3.a scrub ok
Oct 01 13:12:53 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 01 13:12:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct 01 13:12:53 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct 01 13:12:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct 01 13:12:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 01 13:12:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct 01 13:12:54 compute-0 ceph-mon[74802]: 7.1 scrub starts
Oct 01 13:12:54 compute-0 ceph-mon[74802]: 7.1 scrub ok
Oct 01 13:12:54 compute-0 ceph-mon[74802]: 5.19 scrub starts
Oct 01 13:12:54 compute-0 ceph-mon[74802]: 5.19 scrub ok
Oct 01 13:12:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 01 13:12:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 01 13:12:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct 01 13:12:54 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct 01 13:12:55 compute-0 ceph-mon[74802]: pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 01 13:12:55 compute-0 ceph-mon[74802]: osdmap e102: 3 total, 3 up, 3 in
Oct 01 13:12:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:12:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct 01 13:12:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 01 13:12:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct 01 13:12:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 01 13:12:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 01 13:12:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct 01 13:12:56 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct 01 13:12:56 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 01 13:12:56 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 01 13:12:56 compute-0 sshd-session[107566]: Invalid user seekcy from 156.236.31.46 port 43996
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:12:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:12:56 compute-0 sshd-session[107566]: Received disconnect from 156.236.31.46 port 43996:11: Bye Bye [preauth]
Oct 01 13:12:56 compute-0 sshd-session[107566]: Disconnected from invalid user seekcy 156.236.31.46 port 43996 [preauth]
Oct 01 13:12:57 compute-0 ceph-mon[74802]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 01 13:12:57 compute-0 ceph-mon[74802]: osdmap e103: 3 total, 3 up, 3 in
Oct 01 13:12:57 compute-0 ceph-mon[74802]: 5.18 deep-scrub starts
Oct 01 13:12:57 compute-0 ceph-mon[74802]: 5.18 deep-scrub ok
Oct 01 13:12:57 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 103 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103 pruub=15.010645866s) [2] r=-1 lpr=103 pi=[80,103)/1 crt=70'389 mlcod 0'0 active pruub 196.261871338s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:57 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 103 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103 pruub=15.010584831s) [2] r=-1 lpr=103 pi=[80,103)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 196.261871338s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:57 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 103 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=103) [2] r=0 lpr=103 pi=[80,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct 01 13:12:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct 01 13:12:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct 01 13:12:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 104 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[80,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:58 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 104 pg[9.13( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=-1 lpr=104 pi=[80,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:12:58 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 104 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:12:58 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 104 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:12:59 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct 01 13:12:59 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct 01 13:12:59 compute-0 ceph-mon[74802]: pgmap v219: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:59 compute-0 ceph-mon[74802]: osdmap e104: 3 total, 3 up, 3 in
Oct 01 13:12:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct 01 13:12:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct 01 13:12:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct 01 13:12:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct 01 13:12:59 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 105 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[80,104)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:12:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct 01 13:12:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:12:59 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct 01 13:12:59 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct 01 13:13:00 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Oct 01 13:13:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct 01 13:13:00 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Oct 01 13:13:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 7.2 scrub starts
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 7.2 scrub ok
Oct 01 13:13:00 compute-0 ceph-mon[74802]: osdmap e105: 3 total, 3 up, 3 in
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 4.f scrub starts
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 4.f scrub ok
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 7.1b scrub starts
Oct 01 13:13:00 compute-0 ceph-mon[74802]: 7.1b scrub ok
Oct 01 13:13:00 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106 pruub=15.436095238s) [2] async=[2] r=-1 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 70'389 active pruub 199.695755005s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct 01 13:13:00 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=104/105 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106 pruub=15.435786247s) [2] r=-1 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 199.695755005s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:00 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:00 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 106 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Oct 01 13:13:00 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Oct 01 13:13:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 01 13:13:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 01 13:13:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct 01 13:13:01 compute-0 ceph-mon[74802]: pgmap v222: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:01 compute-0 ceph-mon[74802]: 7.8 deep-scrub starts
Oct 01 13:13:01 compute-0 ceph-mon[74802]: 7.8 deep-scrub ok
Oct 01 13:13:01 compute-0 ceph-mon[74802]: osdmap e106: 3 total, 3 up, 3 in
Oct 01 13:13:01 compute-0 ceph-mon[74802]: 6.1 deep-scrub starts
Oct 01 13:13:01 compute-0 ceph-mon[74802]: 6.1 deep-scrub ok
Oct 01 13:13:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct 01 13:13:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct 01 13:13:01 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 107 pg[9.13( v 70'389 (0'0,70'389] local-lis/les=106/107 n=5 ec=73/41 lis/c=104/80 les/c/f=105/81/0 sis=106) [2] r=0 lpr=106 pi=[80,106)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:02 compute-0 ceph-mon[74802]: 7.5 scrub starts
Oct 01 13:13:02 compute-0 ceph-mon[74802]: 7.5 scrub ok
Oct 01 13:13:02 compute-0 ceph-mon[74802]: osdmap e107: 3 total, 3 up, 3 in
Oct 01 13:13:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 01 13:13:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 01 13:13:03 compute-0 sshd-session[107565]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:13:03 compute-0 sshd-session[107565]: banner exchange: Connection from 202.103.55.158 port 47436: Connection timed out
Oct 01 13:13:03 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 01 13:13:03 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 01 13:13:03 compute-0 ceph-mon[74802]: pgmap v225: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:03 compute-0 ceph-mon[74802]: 4.9 scrub starts
Oct 01 13:13:03 compute-0 ceph-mon[74802]: 4.9 scrub ok
Oct 01 13:13:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 01 13:13:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 01 13:13:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 179 B/s wr, 6 op/s; 38 B/s, 1 objects/s recovering
Oct 01 13:13:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct 01 13:13:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 01 13:13:04 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 01 13:13:04 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 01 13:13:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct 01 13:13:04 compute-0 ceph-mon[74802]: 3.e scrub starts
Oct 01 13:13:04 compute-0 ceph-mon[74802]: 3.e scrub ok
Oct 01 13:13:04 compute-0 ceph-mon[74802]: 4.d scrub starts
Oct 01 13:13:04 compute-0 ceph-mon[74802]: 4.d scrub ok
Oct 01 13:13:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 01 13:13:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 01 13:13:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct 01 13:13:04 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct 01 13:13:04 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct 01 13:13:04 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct 01 13:13:05 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 01 13:13:05 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 01 13:13:05 compute-0 ceph-mon[74802]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 179 B/s wr, 6 op/s; 38 B/s, 1 objects/s recovering
Oct 01 13:13:05 compute-0 ceph-mon[74802]: 7.15 scrub starts
Oct 01 13:13:05 compute-0 ceph-mon[74802]: 7.15 scrub ok
Oct 01 13:13:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 01 13:13:05 compute-0 ceph-mon[74802]: osdmap e108: 3 total, 3 up, 3 in
Oct 01 13:13:05 compute-0 ceph-mon[74802]: 4.7 scrub starts
Oct 01 13:13:05 compute-0 ceph-mon[74802]: 4.7 scrub ok
Oct 01 13:13:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct 01 13:13:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 01 13:13:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct 01 13:13:06 compute-0 ceph-mon[74802]: 3.11 scrub starts
Oct 01 13:13:06 compute-0 ceph-mon[74802]: 3.11 scrub ok
Oct 01 13:13:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 01 13:13:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 01 13:13:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct 01 13:13:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct 01 13:13:06 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 109 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=13.606899261s) [1] r=-1 lpr=109 pi=[80,109)/1 crt=70'389 mlcod 0'0 active pruub 204.255828857s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:06 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 109 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=13.606764793s) [1] r=-1 lpr=109 pi=[80,109)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 204.255828857s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:06 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 109 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=109) [1] r=0 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct 01 13:13:07 compute-0 ceph-mon[74802]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 01 13:13:07 compute-0 ceph-mon[74802]: osdmap e109: 3 total, 3 up, 3 in
Oct 01 13:13:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct 01 13:13:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct 01 13:13:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:07 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 110 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:07 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 110 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:07 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct 01 13:13:07 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct 01 13:13:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 13:13:08 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 01 13:13:08 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 01 13:13:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct 01 13:13:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct 01 13:13:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct 01 13:13:08 compute-0 ceph-mon[74802]: osdmap e110: 3 total, 3 up, 3 in
Oct 01 13:13:08 compute-0 ceph-mon[74802]: 2.17 scrub starts
Oct 01 13:13:08 compute-0 ceph-mon[74802]: 2.17 scrub ok
Oct 01 13:13:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct 01 13:13:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct 01 13:13:08 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 111 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=110) [1]/[0] async=[1] r=0 lpr=110 pi=[80,110)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:09 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct 01 13:13:09 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct 01 13:13:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 01 13:13:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 01 13:13:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct 01 13:13:09 compute-0 ceph-mon[74802]: pgmap v231: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct 01 13:13:09 compute-0 ceph-mon[74802]: 7.11 scrub starts
Oct 01 13:13:09 compute-0 ceph-mon[74802]: 7.11 scrub ok
Oct 01 13:13:09 compute-0 ceph-mon[74802]: 4.12 scrub starts
Oct 01 13:13:09 compute-0 ceph-mon[74802]: 4.12 scrub ok
Oct 01 13:13:09 compute-0 ceph-mon[74802]: osdmap e111: 3 total, 3 up, 3 in
Oct 01 13:13:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct 01 13:13:09 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct 01 13:13:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:09 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.016909599s) [1] async=[1] r=-1 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 70'389 active pruub 208.746765137s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:09 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 112 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=110/111 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.016639709s) [1] r=-1 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 208.746765137s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 01 13:13:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 01 13:13:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct 01 13:13:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct 01 13:13:10 compute-0 ceph-mon[74802]: 3.16 scrub starts
Oct 01 13:13:10 compute-0 ceph-mon[74802]: 3.16 scrub ok
Oct 01 13:13:10 compute-0 ceph-mon[74802]: 4.8 scrub starts
Oct 01 13:13:10 compute-0 ceph-mon[74802]: 4.8 scrub ok
Oct 01 13:13:10 compute-0 ceph-mon[74802]: osdmap e112: 3 total, 3 up, 3 in
Oct 01 13:13:10 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct 01 13:13:10 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 113 pg[9.15( v 70'389 (0'0,70'389] local-lis/les=112/113 n=5 ec=73/41 lis/c=110/80 les/c/f=111/81/0 sis=112) [1] r=0 lpr=112 pi=[80,112)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:11 compute-0 ceph-mon[74802]: pgmap v234: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:11 compute-0 ceph-mon[74802]: 6.2 scrub starts
Oct 01 13:13:11 compute-0 ceph-mon[74802]: 6.2 scrub ok
Oct 01 13:13:11 compute-0 ceph-mon[74802]: osdmap e113: 3 total, 3 up, 3 in
Oct 01 13:13:11 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct 01 13:13:11 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct 01 13:13:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:11 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 01 13:13:11 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 01 13:13:12 compute-0 ceph-mon[74802]: 3.12 scrub starts
Oct 01 13:13:12 compute-0 ceph-mon[74802]: 3.12 scrub ok
Oct 01 13:13:13 compute-0 ceph-mon[74802]: pgmap v236: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:13 compute-0 ceph-mon[74802]: 3.18 scrub starts
Oct 01 13:13:13 compute-0 ceph-mon[74802]: 3.18 scrub ok
Oct 01 13:13:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct 01 13:13:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 01 13:13:13 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 01 13:13:13 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 01 13:13:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct 01 13:13:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 01 13:13:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 01 13:13:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct 01 13:13:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct 01 13:13:14 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 114 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=15.651785851s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=70'389 mlcod 0'0 active pruub 204.694458008s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:14 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 114 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=15.651704788s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 204.694458008s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:14 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=114) [0] r=0 lpr=114 pi=[89,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:14 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 01 13:13:14 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 01 13:13:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct 01 13:13:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct 01 13:13:15 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct 01 13:13:15 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 115 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:15 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 115 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:15 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 115 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:15 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 115 pg[9.16( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:15 compute-0 ceph-mon[74802]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:15 compute-0 ceph-mon[74802]: 7.1c scrub starts
Oct 01 13:13:15 compute-0 ceph-mon[74802]: 7.1c scrub ok
Oct 01 13:13:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 01 13:13:15 compute-0 ceph-mon[74802]: osdmap e114: 3 total, 3 up, 3 in
Oct 01 13:13:15 compute-0 ceph-mon[74802]: osdmap e115: 3 total, 3 up, 3 in
Oct 01 13:13:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct 01 13:13:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 01 13:13:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct 01 13:13:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 01 13:13:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct 01 13:13:16 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct 01 13:13:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct 01 13:13:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct 01 13:13:16 compute-0 ceph-mon[74802]: 7.a scrub starts
Oct 01 13:13:16 compute-0 ceph-mon[74802]: 7.a scrub ok
Oct 01 13:13:16 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 01 13:13:16 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 01 13:13:16 compute-0 ceph-mon[74802]: osdmap e116: 3 total, 3 up, 3 in
Oct 01 13:13:16 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 116 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[89,115)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 01 13:13:17 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 01 13:13:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct 01 13:13:17 compute-0 ceph-mon[74802]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct 01 13:13:17 compute-0 ceph-mon[74802]: 6.6 scrub starts
Oct 01 13:13:17 compute-0 ceph-mon[74802]: 6.6 scrub ok
Oct 01 13:13:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct 01 13:13:17 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct 01 13:13:17 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:17 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:17 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.109450340s) [0] async=[0] r=-1 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 70'389 active pruub 206.975601196s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:17 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 117 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=115/116 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.109361649s) [0] r=-1 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 206.975601196s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 13:13:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct 01 13:13:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct 01 13:13:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 01 13:13:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct 01 13:13:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct 01 13:13:18 compute-0 ceph-mon[74802]: 6.e scrub starts
Oct 01 13:13:18 compute-0 ceph-mon[74802]: 6.e scrub ok
Oct 01 13:13:18 compute-0 ceph-mon[74802]: osdmap e117: 3 total, 3 up, 3 in
Oct 01 13:13:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 01 13:13:18 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 118 pg[9.16( v 70'389 (0'0,70'389] local-lis/les=117/118 n=5 ec=73/41 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 01 13:13:18 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 01 13:13:19 compute-0 ceph-mon[74802]: pgmap v243: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 01 13:13:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 01 13:13:19 compute-0 ceph-mon[74802]: osdmap e118: 3 total, 3 up, 3 in
Oct 01 13:13:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Oct 01 13:13:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct 01 13:13:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 01 13:13:19 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 01 13:13:19 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 01 13:13:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct 01 13:13:20 compute-0 ceph-mon[74802]: 4.18 scrub starts
Oct 01 13:13:20 compute-0 ceph-mon[74802]: 4.18 scrub ok
Oct 01 13:13:20 compute-0 ceph-mon[74802]: pgmap v245: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 0 objects/s recovering
Oct 01 13:13:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 01 13:13:20 compute-0 ceph-mon[74802]: 6.7 scrub starts
Oct 01 13:13:20 compute-0 ceph-mon[74802]: 6.7 scrub ok
Oct 01 13:13:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 01 13:13:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct 01 13:13:20 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct 01 13:13:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 01 13:13:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 01 13:13:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 01 13:13:21 compute-0 ceph-mon[74802]: osdmap e119: 3 total, 3 up, 3 in
Oct 01 13:13:21 compute-0 ceph-mon[74802]: 6.3 scrub starts
Oct 01 13:13:21 compute-0 ceph-mon[74802]: 6.3 scrub ok
Oct 01 13:13:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Oct 01 13:13:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct 01 13:13:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 01 13:13:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Oct 01 13:13:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Oct 01 13:13:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct 01 13:13:22 compute-0 ceph-mon[74802]: pgmap v247: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Oct 01 13:13:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 01 13:13:22 compute-0 ceph-mon[74802]: 6.5 deep-scrub starts
Oct 01 13:13:22 compute-0 ceph-mon[74802]: 6.5 deep-scrub ok
Oct 01 13:13:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 01 13:13:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct 01 13:13:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 01 13:13:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 01 13:13:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct 01 13:13:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 119 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119 pruub=13.049392700s) [2] r=-1 lpr=119 pi=[80,119)/1 crt=70'389 mlcod 0'0 active pruub 220.262161255s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 120 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119 pruub=13.049277306s) [2] r=-1 lpr=119 pi=[80,119)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.262161255s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 120 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=119) [2] r=0 lpr=120 pi=[80,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct 01 13:13:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 01 13:13:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct 01 13:13:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 01 13:13:23 compute-0 ceph-mon[74802]: 4.13 scrub starts
Oct 01 13:13:23 compute-0 ceph-mon[74802]: 4.13 scrub ok
Oct 01 13:13:23 compute-0 ceph-mon[74802]: osdmap e120: 3 total, 3 up, 3 in
Oct 01 13:13:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 01 13:13:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 01 13:13:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct 01 13:13:23 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct 01 13:13:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=-1 lpr=121 pi=[80,121)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:23 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 121 pg[9.19( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=-1 lpr=121 pi=[80,121)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 121 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:23 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 121 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=80/81 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:24 compute-0 sudo[107646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:24 compute-0 sudo[107646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:24 compute-0 sudo[107646]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:24 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct 01 13:13:24 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct 01 13:13:24 compute-0 sudo[107671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:13:24 compute-0 sudo[107671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:24 compute-0 sudo[107671]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:24 compute-0 sudo[107696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:24 compute-0 sudo[107696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:24 compute-0 sudo[107696]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:24 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct 01 13:13:24 compute-0 sudo[107721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:13:24 compute-0 sudo[107721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:24 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct 01 13:13:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct 01 13:13:24 compute-0 ceph-mon[74802]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 01 13:13:24 compute-0 ceph-mon[74802]: osdmap e121: 3 total, 3 up, 3 in
Oct 01 13:13:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct 01 13:13:24 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct 01 13:13:24 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 122 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=80/80 les/c/f=81/81/0 sis=121) [2]/[0] async=[2] r=0 lpr=121 pi=[80,121)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:25 compute-0 sudo[107721]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:25 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 133d785c-70a1-4e52-b8b6-c6d9dc4bf703 does not exist
Oct 01 13:13:25 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ce3e324-bc9b-4970-844a-698f8e615679 does not exist
Oct 01 13:13:25 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e8c75221-df56-4f6c-9fbe-352332bcfad6 does not exist
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:13:25 compute-0 sudo[107778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:25 compute-0 sudo[107778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:25 compute-0 sudo[107778]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct 01 13:13:25 compute-0 sudo[107803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:13:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:25 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123 pruub=15.423336029s) [2] async=[2] r=-1 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 70'389 active pruub 225.076110840s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:25 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 123 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=121/122 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123 pruub=15.423220634s) [2] r=-1 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 225.076110840s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:25 compute-0 sudo[107803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:25 compute-0 sudo[107803]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:25 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct 01 13:13:25 compute-0 sudo[107828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:25 compute-0 sudo[107828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:25 compute-0 sudo[107828]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:25 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct 01 13:13:25 compute-0 sudo[107853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:13:25 compute-0 sudo[107853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:25 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 01 13:13:25 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 01 13:13:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct 01 13:13:25 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.c scrub starts
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.c scrub ok
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.9 scrub starts
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.9 scrub ok
Oct 01 13:13:25 compute-0 ceph-mon[74802]: osdmap e122: 3 total, 3 up, 3 in
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:13:25 compute-0 ceph-mon[74802]: osdmap e123: 3 total, 3 up, 3 in
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.4 scrub starts
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.4 scrub ok
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.a scrub starts
Oct 01 13:13:25 compute-0 ceph-mon[74802]: 6.a scrub ok
Oct 01 13:13:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.033982346 +0000 UTC m=+0.062652162 container create 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:13:26 compute-0 systemd[1]: Started libpod-conmon-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope.
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.007201302 +0000 UTC m=+0.035871198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.136535892 +0000 UTC m=+0.165205748 container init 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.15043746 +0000 UTC m=+0.179107276 container start 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.153926222 +0000 UTC m=+0.182596118 container attach 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:13:26 compute-0 pensive_tharp[107936]: 167 167
Oct 01 13:13:26 compute-0 systemd[1]: libpod-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope: Deactivated successfully.
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.160040019 +0000 UTC m=+0.188709835 container died 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5608b8270fb9d29d155b716a070243cc772e5a7bfe49108dc6388bf0accfe884-merged.mount: Deactivated successfully.
Oct 01 13:13:26 compute-0 podman[107920]: 2025-10-01 13:13:26.205329279 +0000 UTC m=+0.233999095 container remove 47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:13:26 compute-0 systemd[1]: libpod-conmon-47d64db6ed24bcc1c8223177cb26862ffaf6b20c5a80463fa1825ef6808d3fb7.scope: Deactivated successfully.
Oct 01 13:13:26 compute-0 podman[107960]: 2025-10-01 13:13:26.404134889 +0000 UTC m=+0.059122878 container create a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:13:26 compute-0 systemd[1]: Started libpod-conmon-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope.
Oct 01 13:13:26 compute-0 podman[107960]: 2025-10-01 13:13:26.375081602 +0000 UTC m=+0.030069651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct 01 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 01 13:13:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct 01 13:13:26 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct 01 13:13:26 compute-0 podman[107960]: 2025-10-01 13:13:26.519694735 +0000 UTC m=+0.174682764 container init a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:13:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124 pruub=12.433906555s) [0] r=-1 lpr=124 pi=[97,124)/1 crt=70'389 mlcod 0'0 active pruub 213.158920288s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124 pruub=12.432840347s) [0] r=-1 lpr=124 pi=[97,124)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 213.158920288s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:26 compute-0 podman[107960]: 2025-10-01 13:13:26.530611516 +0000 UTC m=+0.185599505 container start a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:13:26 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 124 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=124) [0] r=0 lpr=124 pi=[97,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct 01 13:13:26 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 124 pg[9.19( v 70'389 (0'0,70'389] local-lis/les=123/124 n=5 ec=73/41 lis/c=121/80 les/c/f=122/81/0 sis=123) [2] r=0 lpr=123 pi=[80,123)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:26 compute-0 podman[107960]: 2025-10-01 13:13:26.538387187 +0000 UTC m=+0.193375156 container attach a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:13:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct 01 13:13:26 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 01 13:13:26 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 01 13:13:26 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct 01 13:13:26 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct 01 13:13:27 compute-0 ceph-mon[74802]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 01 13:13:27 compute-0 ceph-mon[74802]: osdmap e124: 3 total, 3 up, 3 in
Oct 01 13:13:27 compute-0 ceph-mon[74802]: 6.b scrub starts
Oct 01 13:13:27 compute-0 ceph-mon[74802]: 6.b scrub ok
Oct 01 13:13:27 compute-0 ceph-mon[74802]: 10.1e scrub starts
Oct 01 13:13:27 compute-0 ceph-mon[74802]: 10.1e scrub ok
Oct 01 13:13:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct 01 13:13:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct 01 13:13:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct 01 13:13:27 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[97,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:27 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 125 pg[9.1c( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=-1 lpr=125 pi=[97,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 125 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:27 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 125 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=97/98 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:27 compute-0 compassionate_darwin[107977]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:13:27 compute-0 compassionate_darwin[107977]: --> relative data size: 1.0
Oct 01 13:13:27 compute-0 compassionate_darwin[107977]: --> All data devices are unavailable
Oct 01 13:13:27 compute-0 systemd[1]: libpod-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Deactivated successfully.
Oct 01 13:13:27 compute-0 systemd[1]: libpod-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Consumed 1.107s CPU time.
Oct 01 13:13:27 compute-0 podman[107960]: 2025-10-01 13:13:27.684586551 +0000 UTC m=+1.339574500 container died a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-46abffe664835ac0e9b2e722eaca4c485f0420e79647cbb6249b1be1216715fa-merged.mount: Deactivated successfully.
Oct 01 13:13:27 compute-0 podman[107960]: 2025-10-01 13:13:27.755679613 +0000 UTC m=+1.410667572 container remove a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:13:27 compute-0 systemd[1]: libpod-conmon-a416ddd580059a9a12c1666f6de05317e2b6ed9aa71c4c54d490d6017ba0b2bf.scope: Deactivated successfully.
Oct 01 13:13:27 compute-0 sudo[107853]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct 01 13:13:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 01 13:13:27 compute-0 sudo[108020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:27 compute-0 sudo[108020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:27 compute-0 sudo[108020]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:27 compute-0 sudo[108045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:13:27 compute-0 sudo[108045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:27 compute-0 sudo[108045]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:28 compute-0 sudo[108070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:28 compute-0 sudo[108070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:28 compute-0 sudo[108070]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:28 compute-0 sudo[108095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:13:28 compute-0 sudo[108095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.478405733 +0000 UTC m=+0.053969850 container create 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:13:28 compute-0 systemd[1]: Started libpod-conmon-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope.
Oct 01 13:13:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct 01 13:13:28 compute-0 ceph-mon[74802]: 4.11 scrub starts
Oct 01 13:13:28 compute-0 ceph-mon[74802]: 4.11 scrub ok
Oct 01 13:13:28 compute-0 ceph-mon[74802]: osdmap e125: 3 total, 3 up, 3 in
Oct 01 13:13:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.44792894 +0000 UTC m=+0.023493027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 01 13:13:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct 01 13:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:28 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.572096954 +0000 UTC m=+0.147661061 container init 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.583705798 +0000 UTC m=+0.159269875 container start 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.587098958 +0000 UTC m=+0.162663035 container attach 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:13:28 compute-0 modest_panini[108179]: 167 167
Oct 01 13:13:28 compute-0 systemd[1]: libpod-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope: Deactivated successfully.
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.590358072 +0000 UTC m=+0.165922149 container died 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd54492720f394be760e19330d3f368269709556790f9056a2ef958b6ee3a89-merged.mount: Deactivated successfully.
Oct 01 13:13:28 compute-0 podman[108163]: 2025-10-01 13:13:28.632804981 +0000 UTC m=+0.208369058 container remove 14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:13:28 compute-0 systemd[1]: libpod-conmon-14891ca1f6a8ed02d4bd167e4fcf4cb5d710d2332ceff2a6a9c401e16e0cd471.scope: Deactivated successfully.
Oct 01 13:13:28 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Oct 01 13:13:28 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 126 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=97/97 les/c/f=98/98/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[97,125)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:28 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Oct 01 13:13:28 compute-0 podman[108202]: 2025-10-01 13:13:28.869826243 +0000 UTC m=+0.082971866 container create c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:13:28 compute-0 podman[108202]: 2025-10-01 13:13:28.827640613 +0000 UTC m=+0.040786266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:28 compute-0 systemd[1]: Started libpod-conmon-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope.
Oct 01 13:13:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:28 compute-0 podman[108202]: 2025-10-01 13:13:28.980547272 +0000 UTC m=+0.193692955 container init c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 01 13:13:28 compute-0 podman[108202]: 2025-10-01 13:13:28.987466316 +0000 UTC m=+0.200611959 container start c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:13:28 compute-0 podman[108202]: 2025-10-01 13:13:28.991402082 +0000 UTC m=+0.204547745 container attach c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:13:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct 01 13:13:29 compute-0 ceph-mon[74802]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 01 13:13:29 compute-0 ceph-mon[74802]: osdmap e126: 3 total, 3 up, 3 in
Oct 01 13:13:29 compute-0 ceph-mon[74802]: 10.d deep-scrub starts
Oct 01 13:13:29 compute-0 ceph-mon[74802]: 10.d deep-scrub ok
Oct 01 13:13:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct 01 13:13:29 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct 01 13:13:29 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127 pruub=15.120691299s) [0] async=[0] r=-1 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 70'389 active pruub 218.893310547s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:29 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=125/126 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127 pruub=15.120597839s) [0] r=-1 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 218.893310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:29 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:29 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 127 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:29 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 01 13:13:29 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]: {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     "0": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "devices": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "/dev/loop3"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             ],
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_name": "ceph_lv0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_size": "21470642176",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "name": "ceph_lv0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "tags": {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_name": "ceph",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.crush_device_class": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.encrypted": "0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_id": "0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.vdo": "0"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             },
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "vg_name": "ceph_vg0"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         }
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     ],
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     "1": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "devices": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "/dev/loop4"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             ],
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_name": "ceph_lv1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_size": "21470642176",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "name": "ceph_lv1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "tags": {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_name": "ceph",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.crush_device_class": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.encrypted": "0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_id": "1",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.vdo": "0"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             },
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "vg_name": "ceph_vg1"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         }
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     ],
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     "2": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "devices": [
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "/dev/loop5"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             ],
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_name": "ceph_lv2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_size": "21470642176",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "name": "ceph_lv2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "tags": {
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.cluster_name": "ceph",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.crush_device_class": "",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.encrypted": "0",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osd_id": "2",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:                 "ceph.vdo": "0"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             },
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "type": "block",
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:             "vg_name": "ceph_vg2"
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:         }
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]:     ]
Oct 01 13:13:29 compute-0 distracted_northcutt[108219]: }
Oct 01 13:13:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct 01 13:13:29 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 01 13:13:29 compute-0 systemd[1]: libpod-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope: Deactivated successfully.
Oct 01 13:13:29 compute-0 podman[108228]: 2025-10-01 13:13:29.836460397 +0000 UTC m=+0.026936589 container died c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fcaa6067e2259a0c07fa20809bd09f877b7a9120a86356bb8957b937037ae2e-merged.mount: Deactivated successfully.
Oct 01 13:13:29 compute-0 podman[108228]: 2025-10-01 13:13:29.903719375 +0000 UTC m=+0.094195507 container remove c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:13:29 compute-0 systemd[1]: libpod-conmon-c82253b0530b5dd6671166c9725380f074328ec7eac7d277c6c9f38fa4c92fe1.scope: Deactivated successfully.
Oct 01 13:13:29 compute-0 sudo[108095]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:30 compute-0 sudo[108243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:30 compute-0 sudo[108243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:30 compute-0 sudo[108243]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:30 compute-0 sudo[108268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:13:30 compute-0 sudo[108268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:30 compute-0 sudo[108268]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:30 compute-0 sudo[107492]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:30 compute-0 sudo[108293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:30 compute-0 sudo[108293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:30 compute-0 sudo[108293]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:30 compute-0 sudo[108319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:13:30 compute-0 sudo[108319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.515865721 +0000 UTC m=+0.044588028 container create 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:13:30 compute-0 systemd[1]: Started libpod-conmon-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope.
Oct 01 13:13:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct 01 13:13:30 compute-0 ceph-mon[74802]: osdmap e127: 3 total, 3 up, 3 in
Oct 01 13:13:30 compute-0 ceph-mon[74802]: 10.8 scrub starts
Oct 01 13:13:30 compute-0 ceph-mon[74802]: 10.8 scrub ok
Oct 01 13:13:30 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 01 13:13:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 01 13:13:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct 01 13:13:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.501303992 +0000 UTC m=+0.030026289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:30 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 128 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128 pruub=15.895023346s) [0] r=-1 lpr=128 pi=[89,128)/1 crt=70'389 mlcod 0'0 active pruub 220.694305420s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:30 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 128 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128 pruub=15.894518852s) [0] r=-1 lpr=128 pi=[89,128)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.694305420s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:30 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=128) [0] r=0 lpr=128 pi=[89,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:30 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 128 pg[9.1c( v 70'389 (0'0,70'389] local-lis/les=127/128 n=5 ec=73/41 lis/c=125/97 les/c/f=126/98/0 sis=127) [0] r=0 lpr=127 pi=[97,127)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.636017915 +0000 UTC m=+0.164740312 container init 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.648539798 +0000 UTC m=+0.177262135 container start 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.651854445 +0000 UTC m=+0.180576782 container attach 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:13:30 compute-0 great_aryabhata[108521]: 167 167
Oct 01 13:13:30 compute-0 systemd[1]: libpod-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope: Deactivated successfully.
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.656202505 +0000 UTC m=+0.184924832 container died 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff28626ae7851121d9dc53108950e1f9f5659f7f4c29e159dac5a581677d26ef-merged.mount: Deactivated successfully.
Oct 01 13:13:30 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 01 13:13:30 compute-0 sudo[108553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gluwkeihwbujpowpmxasewfwmcrjrwxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324410.3304298-128-196785642992236/AnsiballZ_command.py'
Oct 01 13:13:30 compute-0 sudo[108553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:30 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 01 13:13:30 compute-0 podman[108476]: 2025-10-01 13:13:30.697911081 +0000 UTC m=+0.226633408 container remove 2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:13:30 compute-0 systemd[1]: libpod-conmon-2c6996d44715823682c9307fbfe4df33f73d4f8a8f6797029a0085cf8c661632.scope: Deactivated successfully.
Oct 01 13:13:30 compute-0 python3.9[108561]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:13:30 compute-0 podman[108574]: 2025-10-01 13:13:30.921954914 +0000 UTC m=+0.065039318 container create e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:13:30 compute-0 systemd[1]: Started libpod-conmon-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope.
Oct 01 13:13:30 compute-0 podman[108574]: 2025-10-01 13:13:30.892234275 +0000 UTC m=+0.035318739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:13:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:13:31 compute-0 podman[108574]: 2025-10-01 13:13:31.012988638 +0000 UTC m=+0.156073062 container init e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:13:31 compute-0 podman[108574]: 2025-10-01 13:13:31.025935345 +0000 UTC m=+0.169019729 container start e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 13:13:31 compute-0 podman[108574]: 2025-10-01 13:13:31.028993874 +0000 UTC m=+0.172078258 container attach e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:13:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct 01 13:13:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct 01 13:13:31 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct 01 13:13:31 compute-0 ceph-mon[74802]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 01 13:13:31 compute-0 ceph-mon[74802]: osdmap e128: 3 total, 3 up, 3 in
Oct 01 13:13:31 compute-0 ceph-mon[74802]: 10.4 scrub starts
Oct 01 13:13:31 compute-0 ceph-mon[74802]: 10.4 scrub ok
Oct 01 13:13:31 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[89,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:31 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=-1 lpr=129 pi=[89,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:31 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 129 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:31 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 129 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:31 compute-0 sudo[108553]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct 01 13:13:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:13:31 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 01 13:13:31 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 01 13:13:32 compute-0 sweet_allen[108591]: {
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_id": 0,
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "type": "bluestore"
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     },
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_id": 2,
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "type": "bluestore"
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     },
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_id": 1,
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:13:32 compute-0 sweet_allen[108591]:         "type": "bluestore"
Oct 01 13:13:32 compute-0 sweet_allen[108591]:     }
Oct 01 13:13:32 compute-0 sweet_allen[108591]: }
Oct 01 13:13:32 compute-0 systemd[1]: libpod-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope: Deactivated successfully.
Oct 01 13:13:32 compute-0 podman[108574]: 2025-10-01 13:13:32.034998308 +0000 UTC m=+1.178082692 container died e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-acf9296e79b09c04bb225093ec0d5e9cd017d79f027cd95743293f9f514bee35-merged.mount: Deactivated successfully.
Oct 01 13:13:32 compute-0 podman[108574]: 2025-10-01 13:13:32.093534395 +0000 UTC m=+1.236618779 container remove e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 01 13:13:32 compute-0 systemd[1]: libpod-conmon-e2ac9e16eceb3034069bdde81a451b6ed19a0f27c49762f190c06db1d9f3d970.scope: Deactivated successfully.
Oct 01 13:13:32 compute-0 sudo[108319]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:13:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:13:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 401be56a-6046-4c5b-a34c-e317fa9245ed does not exist
Oct 01 13:13:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2945ce16-a488-4427-a4ff-ceed9baa97c6 does not exist
Oct 01 13:13:32 compute-0 sudo[108845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:13:32 compute-0 sudo[108845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:32 compute-0 sudo[108845]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:32 compute-0 sudo[108870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:13:32 compute-0 sudo[108870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:13:32 compute-0 sudo[108870]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct 01 13:13:32 compute-0 ceph-mon[74802]: osdmap e129: 3 total, 3 up, 3 in
Oct 01 13:13:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 01 13:13:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:13:32 compute-0 sudo[108968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jftmyakgocaqhjstfeinqzddrieafrwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324411.9041748-136-117408543600492/AnsiballZ_selinux.py'
Oct 01 13:13:32 compute-0 sudo[108968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:13:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct 01 13:13:32 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct 01 13:13:32 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130 pruub=13.869067192s) [1] r=-1 lpr=130 pi=[89,130)/1 crt=70'389 mlcod 0'0 active pruub 220.694320679s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:32 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130 pruub=13.868596077s) [1] r=-1 lpr=130 pi=[89,130)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 220.694320679s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:32 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1] r=0 lpr=130 pi=[89,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:32 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 130 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [0]/[2] async=[0] r=0 lpr=129 pi=[89,129)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:32 compute-0 python3.9[108970]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 01 13:13:32 compute-0 sudo[108968]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:33 compute-0 sudo[109120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekbnjxnwnhpnwvacpfbefwmoxfircnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324413.2171392-147-55993851149140/AnsiballZ_command.py'
Oct 01 13:13:33 compute-0 sudo[109120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct 01 13:13:33 compute-0 ceph-mon[74802]: pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:33 compute-0 ceph-mon[74802]: 4.e scrub starts
Oct 01 13:13:33 compute-0 ceph-mon[74802]: 4.e scrub ok
Oct 01 13:13:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 01 13:13:33 compute-0 ceph-mon[74802]: osdmap e130: 3 total, 3 up, 3 in
Oct 01 13:13:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct 01 13:13:33 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct 01 13:13:33 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:33 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:33 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131 pruub=14.994687080s) [0] async=[0] r=-1 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 70'389 active pruub 222.833190918s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:33 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=129/130 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131 pruub=14.994582176s) [0] r=-1 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 222.833190918s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:33 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:33 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 131 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=89/90 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:33 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[89,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:33 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] r=-1 lpr=131 pi=[89,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:33 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 01 13:13:33 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 01 13:13:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 13:13:33 compute-0 python3.9[109122]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 01 13:13:33 compute-0 sudo[109120]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:34 compute-0 sudo[109272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-femfzadpauhkmouqqviovwxwrcyhtrjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324414.0259643-155-181208246256870/AnsiballZ_file.py'
Oct 01 13:13:34 compute-0 sudo[109272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:34 compute-0 python3.9[109274]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:13:34 compute-0 sudo[109272]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:34 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 01 13:13:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct 01 13:13:34 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 01 13:13:34 compute-0 ceph-mon[74802]: osdmap e131: 3 total, 3 up, 3 in
Oct 01 13:13:34 compute-0 ceph-mon[74802]: 10.7 scrub starts
Oct 01 13:13:34 compute-0 ceph-mon[74802]: 10.7 scrub ok
Oct 01 13:13:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct 01 13:13:34 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct 01 13:13:34 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 132 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=89/89 les/c/f=90/90/0 sis=131) [1]/[2] async=[1] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:34 compute-0 ceph-osd[88455]: osd.0 pg_epoch: 132 pg[9.1e( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=129/89 les/c/f=130/90/0 sis=131) [0] r=0 lpr=131 pi=[89,131)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:35 compute-0 sudo[109424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwzylusudbkjesthzovivvjsvxakfjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324414.7719998-163-63827387030000/AnsiballZ_mount.py'
Oct 01 13:13:35 compute-0 sudo[109424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct 01 13:13:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct 01 13:13:35 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct 01 13:13:35 compute-0 python3.9[109426]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 01 13:13:35 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133 pruub=15.146118164s) [1] async=[1] r=-1 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 70'389 active pruub 224.864700317s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:35 compute-0 ceph-osd[90500]: osd.2 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=131/132 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133 pruub=15.146004677s) [1] r=-1 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 unknown NOTIFY pruub 224.864700317s@ mbc={}] state<Start>: transitioning to Stray
Oct 01 13:13:35 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 luod=0'0 crt=70'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 01 13:13:35 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 133 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=0/0 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 01 13:13:35 compute-0 sudo[109424]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:35 compute-0 ceph-mon[74802]: pgmap v265: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 13:13:35 compute-0 ceph-mon[74802]: 10.1 scrub starts
Oct 01 13:13:35 compute-0 ceph-mon[74802]: 10.1 scrub ok
Oct 01 13:13:35 compute-0 ceph-mon[74802]: osdmap e132: 3 total, 3 up, 3 in
Oct 01 13:13:35 compute-0 ceph-mon[74802]: osdmap e133: 3 total, 3 up, 3 in
Oct 01 13:13:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 13:13:35 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 01 13:13:35 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 01 13:13:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct 01 13:13:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct 01 13:13:36 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct 01 13:13:36 compute-0 ceph-osd[89484]: osd.1 pg_epoch: 134 pg[9.1f( v 70'389 (0'0,70'389] local-lis/les=133/134 n=5 ec=73/41 lis/c=131/89 les/c/f=132/90/0 sis=133) [1] r=0 lpr=133 pi=[89,133)/1 crt=70'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 01 13:13:36 compute-0 sudo[109576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzdpteqbwfmmuaadkqbjruqipuxjlws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324416.2899363-191-71937393131287/AnsiballZ_file.py'
Oct 01 13:13:36 compute-0 sudo[109576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:36 compute-0 python3.9[109578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:13:36 compute-0 sudo[109576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:37 compute-0 sudo[109728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxrgtvwobnzzufgbrmzlwpszqbgpvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324417.0952306-199-95078176788335/AnsiballZ_stat.py'
Oct 01 13:13:37 compute-0 sudo[109728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:37 compute-0 ceph-mon[74802]: pgmap v268: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct 01 13:13:37 compute-0 ceph-mon[74802]: 4.1b scrub starts
Oct 01 13:13:37 compute-0 ceph-mon[74802]: 4.1b scrub ok
Oct 01 13:13:37 compute-0 ceph-mon[74802]: osdmap e134: 3 total, 3 up, 3 in
Oct 01 13:13:37 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 01 13:13:37 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 01 13:13:37 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 01 13:13:37 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 01 13:13:37 compute-0 python3.9[109730]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:13:37 compute-0 sudo[109728]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Oct 01 13:13:37 compute-0 sudo[109808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzpvsjtyaqabmmrwgecikdauhgfiydrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324417.0952306-199-95078176788335/AnsiballZ_file.py'
Oct 01 13:13:37 compute-0 sudo[109808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:38 compute-0 python3.9[109810]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:13:38 compute-0 sudo[109808]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:38 compute-0 sshd-session[109731]: Received disconnect from 200.7.101.139 port 50874:11: Bye Bye [preauth]
Oct 01 13:13:38 compute-0 sshd-session[109731]: Disconnected from authenticating user root 200.7.101.139 port 50874 [preauth]
Oct 01 13:13:38 compute-0 ceph-mon[74802]: 6.d scrub starts
Oct 01 13:13:38 compute-0 ceph-mon[74802]: 6.d scrub ok
Oct 01 13:13:38 compute-0 ceph-mon[74802]: 10.16 scrub starts
Oct 01 13:13:38 compute-0 ceph-mon[74802]: 10.16 scrub ok
Oct 01 13:13:38 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct 01 13:13:38 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct 01 13:13:39 compute-0 sudo[109960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uozuumufahuttpezuoyvxqiglkvjczlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324418.7196057-223-200682968531828/AnsiballZ_getent.py'
Oct 01 13:13:39 compute-0 sudo[109960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:39 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct 01 13:13:39 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct 01 13:13:39 compute-0 python3.9[109962]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 01 13:13:39 compute-0 sudo[109960]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:39 compute-0 ceph-mon[74802]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Oct 01 13:13:39 compute-0 ceph-mon[74802]: 10.e deep-scrub starts
Oct 01 13:13:39 compute-0 ceph-mon[74802]: 10.e deep-scrub ok
Oct 01 13:13:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 13:13:39 compute-0 sshd-session[109963]: Invalid user user from 80.253.31.232 port 55436
Oct 01 13:13:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 01 13:13:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 01 13:13:40 compute-0 sshd-session[109963]: Received disconnect from 80.253.31.232 port 55436:11: Bye Bye [preauth]
Oct 01 13:13:40 compute-0 sshd-session[109963]: Disconnected from invalid user user 80.253.31.232 port 55436 [preauth]
Oct 01 13:13:40 compute-0 sudo[110115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdemnsnxivuoswvvgfxhqntlkqemmjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324419.7939525-233-41289089817272/AnsiballZ_getent.py'
Oct 01 13:13:40 compute-0 sudo[110115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:40 compute-0 python3.9[110117]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 01 13:13:40 compute-0 sudo[110115]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:40 compute-0 ceph-mon[74802]: 8.1 scrub starts
Oct 01 13:13:40 compute-0 ceph-mon[74802]: 8.1 scrub ok
Oct 01 13:13:41 compute-0 sudo[110268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzszsbxvrvsaykwsszfpfpzrayfnmebv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324420.5017755-241-45628454918809/AnsiballZ_group.py'
Oct 01 13:13:41 compute-0 sudo[110268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:41 compute-0 python3.9[110270]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:13:41 compute-0 sudo[110268]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:41 compute-0 ceph-mon[74802]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 01 13:13:41 compute-0 ceph-mon[74802]: 4.1c scrub starts
Oct 01 13:13:41 compute-0 ceph-mon[74802]: 4.1c scrub ok
Oct 01 13:13:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 01 13:13:41 compute-0 sudo[110420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pespcfoorsyjcqerfxyvpxmwavlxdnee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324421.5172474-250-90393547199640/AnsiballZ_file.py'
Oct 01 13:13:41 compute-0 sudo[110420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:42 compute-0 python3.9[110422]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 01 13:13:42 compute-0 sudo[110420]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:42 compute-0 sudo[110572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twfecukcbxskhhfgqwwywrgdeijeifgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324422.4835374-261-105094289197093/AnsiballZ_dnf.py'
Oct 01 13:13:42 compute-0 sudo[110572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:43 compute-0 python3.9[110574]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:13:43 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct 01 13:13:43 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct 01 13:13:43 compute-0 ceph-mon[74802]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct 01 13:13:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:44 compute-0 sudo[110572]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:44 compute-0 ceph-mon[74802]: 9.2 scrub starts
Oct 01 13:13:44 compute-0 ceph-mon[74802]: 9.2 scrub ok
Oct 01 13:13:44 compute-0 sudo[110725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpfaqvcjfevqovjqgprzgkltvoikucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324424.5362127-269-268169861518056/AnsiballZ_file.py'
Oct 01 13:13:44 compute-0 sudo[110725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:45 compute-0 python3.9[110727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:13:45 compute-0 sudo[110725]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 01 13:13:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 01 13:13:45 compute-0 ceph-mon[74802]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:45 compute-0 sudo[110877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcggiscqzgvqlehzojfzlfercgrypxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324425.3535995-277-94416565365895/AnsiballZ_stat.py'
Oct 01 13:13:45 compute-0 sudo[110877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:45 compute-0 python3.9[110879]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:13:45 compute-0 sudo[110877]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:46 compute-0 sudo[110955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyqhwlezqeyysvyycqbwlhcatwrglbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324425.3535995-277-94416565365895/AnsiballZ_file.py'
Oct 01 13:13:46 compute-0 sudo[110955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:46 compute-0 python3.9[110957]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:13:46 compute-0 sudo[110955]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct 01 13:13:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct 01 13:13:46 compute-0 ceph-mon[74802]: 8.3 scrub starts
Oct 01 13:13:46 compute-0 ceph-mon[74802]: 8.3 scrub ok
Oct 01 13:13:47 compute-0 sudo[111107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grumouloqnqqyqscghilufgcqglfxzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324426.654736-290-213623860944558/AnsiballZ_stat.py'
Oct 01 13:13:47 compute-0 sudo[111107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:47 compute-0 python3.9[111109]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:13:47 compute-0 sudo[111107]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:47 compute-0 sudo[111185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqnuxyjgonvazzacnuyqtrzfdyuwheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324426.654736-290-213623860944558/AnsiballZ_file.py'
Oct 01 13:13:47 compute-0 sudo[111185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:47 compute-0 ceph-mon[74802]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:47 compute-0 ceph-mon[74802]: 9.4 scrub starts
Oct 01 13:13:47 compute-0 ceph-mon[74802]: 9.4 scrub ok
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:13:47
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control']
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:13:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:13:47 compute-0 python3.9[111187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:13:47 compute-0 sudo[111185]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:48 compute-0 sudo[111337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjuybzaizgwrewkvixfbqsqwixeudxuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324428.2510817-305-89146450583703/AnsiballZ_dnf.py'
Oct 01 13:13:48 compute-0 sudo[111337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:48 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 01 13:13:48 compute-0 ceph-mon[74802]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 01 13:13:48 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 01 13:13:48 compute-0 python3.9[111339]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:13:49 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 01 13:13:49 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 01 13:13:49 compute-0 ceph-mon[74802]: 10.9 scrub starts
Oct 01 13:13:49 compute-0 ceph-mon[74802]: 10.9 scrub ok
Oct 01 13:13:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:50 compute-0 sudo[111337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct 01 13:13:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct 01 13:13:50 compute-0 ceph-mon[74802]: 8.5 scrub starts
Oct 01 13:13:50 compute-0 ceph-mon[74802]: 8.5 scrub ok
Oct 01 13:13:50 compute-0 ceph-mon[74802]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:50 compute-0 python3.9[111490]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:13:51 compute-0 ceph-mon[74802]: 8.7 scrub starts
Oct 01 13:13:51 compute-0 ceph-mon[74802]: 8.7 scrub ok
Oct 01 13:13:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:51 compute-0 python3.9[111642]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 01 13:13:52 compute-0 python3.9[111792]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:13:52 compute-0 ceph-mon[74802]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 01 13:13:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 01 13:13:53 compute-0 sudo[111942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsdwstnplgdsmspidplojsqcjmhqwezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324432.9959075-346-210519814459966/AnsiballZ_systemd.py'
Oct 01 13:13:53 compute-0 sudo[111942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:53 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct 01 13:13:53 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct 01 13:13:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:53 compute-0 ceph-mon[74802]: 4.1a scrub starts
Oct 01 13:13:53 compute-0 ceph-mon[74802]: 4.1a scrub ok
Oct 01 13:13:53 compute-0 ceph-mon[74802]: 8.10 scrub starts
Oct 01 13:13:53 compute-0 ceph-mon[74802]: 8.10 scrub ok
Oct 01 13:13:54 compute-0 python3.9[111944]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:13:54 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 01 13:13:54 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct 01 13:13:54 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 01 13:13:54 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 01 13:13:54 compute-0 systemd[76436]: Created slice User Background Tasks Slice.
Oct 01 13:13:54 compute-0 systemd[76436]: Starting Cleanup of User's Temporary Files and Directories...
Oct 01 13:13:54 compute-0 systemd[76436]: Finished Cleanup of User's Temporary Files and Directories.
Oct 01 13:13:54 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 01 13:13:54 compute-0 sudo[111942]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 01 13:13:54 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 01 13:13:54 compute-0 ceph-mon[74802]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:54 compute-0 ceph-mon[74802]: 8.b scrub starts
Oct 01 13:13:54 compute-0 ceph-mon[74802]: 8.b scrub ok
Oct 01 13:13:55 compute-0 python3.9[112106]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 01 13:13:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:13:55 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 01 13:13:55 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 01 13:13:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:55 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 01 13:13:55 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 01 13:13:55 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 01 13:13:55 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 01 13:13:56 compute-0 ceph-mon[74802]: 8.8 scrub starts
Oct 01 13:13:56 compute-0 ceph-mon[74802]: 8.8 scrub ok
Oct 01 13:13:56 compute-0 ceph-mon[74802]: 11.14 scrub starts
Oct 01 13:13:56 compute-0 ceph-mon[74802]: 11.14 scrub ok
Oct 01 13:13:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 01 13:13:56 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:13:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:13:57 compute-0 sudo[112256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsujnwmsjqzkfabzbftmydtcknstaybo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324436.8109086-403-165438555709342/AnsiballZ_systemd.py'
Oct 01 13:13:57 compute-0 sudo[112256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:57 compute-0 python3.9[112258]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:13:57 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 01 13:13:57 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 01 13:13:57 compute-0 sudo[112256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 01 13:13:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 01 13:13:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:57 compute-0 ceph-mon[74802]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:57 compute-0 ceph-mon[74802]: 4.a scrub starts
Oct 01 13:13:57 compute-0 ceph-mon[74802]: 4.a scrub ok
Oct 01 13:13:57 compute-0 ceph-mon[74802]: 11.4 scrub starts
Oct 01 13:13:57 compute-0 ceph-mon[74802]: 11.4 scrub ok
Oct 01 13:13:58 compute-0 sudo[112410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kybrumdocqohvrczmenufvfesqfgldzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324437.7298324-403-259447446380555/AnsiballZ_systemd.py'
Oct 01 13:13:58 compute-0 sudo[112410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:13:58 compute-0 python3.9[112412]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:13:58 compute-0 sudo[112410]: pam_unix(sudo:session): session closed for user root
Oct 01 13:13:58 compute-0 sshd-session[105727]: Connection closed by 192.168.122.30 port 56882
Oct 01 13:13:58 compute-0 sshd-session[105724]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:13:58 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Oct 01 13:13:58 compute-0 systemd[1]: session-35.scope: Consumed 1min 4.844s CPU time.
Oct 01 13:13:58 compute-0 systemd-logind[818]: Session 35 logged out. Waiting for processes to exit.
Oct 01 13:13:58 compute-0 systemd-logind[818]: Removed session 35.
Oct 01 13:13:58 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Oct 01 13:13:58 compute-0 ceph-mon[74802]: 8.a scrub starts
Oct 01 13:13:58 compute-0 ceph-mon[74802]: 8.a scrub ok
Oct 01 13:13:58 compute-0 ceph-mon[74802]: 8.9 scrub starts
Oct 01 13:13:58 compute-0 ceph-mon[74802]: 8.9 scrub ok
Oct 01 13:13:58 compute-0 ceph-mon[74802]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:58 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Oct 01 13:13:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Oct 01 13:13:59 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Oct 01 13:13:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:13:59 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 01 13:13:59 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 01 13:14:00 compute-0 ceph-mon[74802]: 6.f deep-scrub starts
Oct 01 13:14:00 compute-0 ceph-mon[74802]: 6.f deep-scrub ok
Oct 01 13:14:00 compute-0 ceph-mon[74802]: 9.a deep-scrub starts
Oct 01 13:14:00 compute-0 ceph-mon[74802]: 9.a deep-scrub ok
Oct 01 13:14:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 01 13:14:00 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 01 13:14:01 compute-0 ceph-mon[74802]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:01 compute-0 ceph-mon[74802]: 10.3 scrub starts
Oct 01 13:14:01 compute-0 ceph-mon[74802]: 10.3 scrub ok
Oct 01 13:14:01 compute-0 ceph-mon[74802]: 8.6 scrub starts
Oct 01 13:14:01 compute-0 ceph-mon[74802]: 8.6 scrub ok
Oct 01 13:14:01 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 01 13:14:01 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 01 13:14:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct 01 13:14:01 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct 01 13:14:02 compute-0 ceph-mon[74802]: 8.f scrub starts
Oct 01 13:14:02 compute-0 ceph-mon[74802]: 8.f scrub ok
Oct 01 13:14:02 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct 01 13:14:02 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct 01 13:14:03 compute-0 sshd-session[112439]: Invalid user seekcy from 156.236.31.46 port 44080
Oct 01 13:14:03 compute-0 sshd-session[112439]: Received disconnect from 156.236.31.46 port 44080:11: Bye Bye [preauth]
Oct 01 13:14:03 compute-0 sshd-session[112439]: Disconnected from invalid user seekcy 156.236.31.46 port 44080 [preauth]
Oct 01 13:14:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 01 13:14:03 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 01 13:14:03 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 01 13:14:03 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 01 13:14:03 compute-0 sshd-session[112441]: Accepted publickey for zuul from 192.168.122.30 port 54818 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:14:03 compute-0 systemd-logind[818]: New session 36 of user zuul.
Oct 01 13:14:03 compute-0 systemd[1]: Started Session 36 of User zuul.
Oct 01 13:14:03 compute-0 sshd-session[112441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:14:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:03 compute-0 ceph-mon[74802]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:03 compute-0 ceph-mon[74802]: 10.5 scrub starts
Oct 01 13:14:03 compute-0 ceph-mon[74802]: 10.5 scrub ok
Oct 01 13:14:03 compute-0 ceph-mon[74802]: 8.e scrub starts
Oct 01 13:14:03 compute-0 ceph-mon[74802]: 8.e scrub ok
Oct 01 13:14:04 compute-0 python3.9[112594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:04 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 01 13:14:04 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 9.10 scrub starts
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 9.10 scrub ok
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 11.f scrub starts
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 11.f scrub ok
Oct 01 13:14:05 compute-0 ceph-mon[74802]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 10.a scrub starts
Oct 01 13:14:05 compute-0 ceph-mon[74802]: 10.a scrub ok
Oct 01 13:14:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:05 compute-0 sudo[112750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpclogowwshjptgbxhfowecaqkrlonff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324445.2435684-36-226741506873595/AnsiballZ_getent.py'
Oct 01 13:14:05 compute-0 sudo[112750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:06 compute-0 python3.9[112752]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 01 13:14:06 compute-0 sudo[112750]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:06 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct 01 13:14:06 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct 01 13:14:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 01 13:14:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 01 13:14:06 compute-0 sshd-session[112675]: Invalid user test from 27.254.137.144 port 49872
Oct 01 13:14:06 compute-0 sudo[112903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnefrwlwqxlxeigooxggthnfgfqpfcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324446.4815834-48-106749735332209/AnsiballZ_setup.py'
Oct 01 13:14:06 compute-0 sudo[112903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:07 compute-0 sshd-session[112675]: Received disconnect from 27.254.137.144 port 49872:11: Bye Bye [preauth]
Oct 01 13:14:07 compute-0 sshd-session[112675]: Disconnected from invalid user test 27.254.137.144 port 49872 [preauth]
Oct 01 13:14:07 compute-0 ceph-mon[74802]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:07 compute-0 ceph-mon[74802]: 8.13 scrub starts
Oct 01 13:14:07 compute-0 ceph-mon[74802]: 8.13 scrub ok
Oct 01 13:14:07 compute-0 ceph-mon[74802]: 10.c scrub starts
Oct 01 13:14:07 compute-0 ceph-mon[74802]: 10.c scrub ok
Oct 01 13:14:07 compute-0 python3.9[112905]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:14:07 compute-0 sudo[112903]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 01 13:14:07 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 01 13:14:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:08 compute-0 sudo[112987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkjnkbmprlemcwexhehhlwcjogwamofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324446.4815834-48-106749735332209/AnsiballZ_dnf.py'
Oct 01 13:14:08 compute-0 sudo[112987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:08 compute-0 ceph-mon[74802]: 8.c scrub starts
Oct 01 13:14:08 compute-0 ceph-mon[74802]: 8.c scrub ok
Oct 01 13:14:08 compute-0 python3.9[112989]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 13:14:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 01 13:14:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 01 13:14:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 01 13:14:08 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 01 13:14:09 compute-0 ceph-mon[74802]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:09 compute-0 ceph-mon[74802]: 9.12 scrub starts
Oct 01 13:14:09 compute-0 ceph-mon[74802]: 9.12 scrub ok
Oct 01 13:14:09 compute-0 ceph-mon[74802]: 11.1 scrub starts
Oct 01 13:14:09 compute-0 ceph-mon[74802]: 11.1 scrub ok
Oct 01 13:14:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 01 13:14:09 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 01 13:14:09 compute-0 sudo[112987]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 01 13:14:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 01 13:14:09 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 01 13:14:09 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 01 13:14:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:09 compute-0 sudo[113140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwnenirrcgwierdowxuwverigwdymawp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324449.717534-62-159535395436082/AnsiballZ_dnf.py'
Oct 01 13:14:09 compute-0 sudo[113140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 8.16 scrub starts
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 8.16 scrub ok
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 11.e scrub starts
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 11.e scrub ok
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 10.18 scrub starts
Oct 01 13:14:10 compute-0 ceph-mon[74802]: 10.18 scrub ok
Oct 01 13:14:10 compute-0 python3.9[113142]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct 01 13:14:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct 01 13:14:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:10 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct 01 13:14:10 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct 01 13:14:11 compute-0 ceph-mon[74802]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:11 compute-0 ceph-mon[74802]: 9.14 scrub starts
Oct 01 13:14:11 compute-0 ceph-mon[74802]: 9.14 scrub ok
Oct 01 13:14:11 compute-0 ceph-mon[74802]: 10.1b deep-scrub starts
Oct 01 13:14:11 compute-0 ceph-mon[74802]: 10.1b deep-scrub ok
Oct 01 13:14:11 compute-0 sudo[113140]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct 01 13:14:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct 01 13:14:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:12 compute-0 ceph-mon[74802]: 8.17 deep-scrub starts
Oct 01 13:14:12 compute-0 ceph-mon[74802]: 8.17 deep-scrub ok
Oct 01 13:14:12 compute-0 sudo[113293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeoeogvhvehfeilonurfugwlubpngtkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324451.5893893-70-137882401499461/AnsiballZ_systemd.py'
Oct 01 13:14:12 compute-0 sudo[113293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:12 compute-0 python3.9[113295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:14:12 compute-0 sudo[113293]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:13 compute-0 ceph-mon[74802]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:13 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 01 13:14:13 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 01 13:14:13 compute-0 python3.9[113448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:14 compute-0 ceph-mon[74802]: 8.19 scrub starts
Oct 01 13:14:14 compute-0 ceph-mon[74802]: 8.19 scrub ok
Oct 01 13:14:14 compute-0 sudo[113598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffzasiibmtizubbeubfdonamffhnhjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324453.8505874-88-235334506202236/AnsiballZ_sefcontext.py'
Oct 01 13:14:14 compute-0 sudo[113598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:14 compute-0 python3.9[113600]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 01 13:14:14 compute-0 sudo[113598]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:15 compute-0 ceph-mon[74802]: pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:15 compute-0 python3.9[113750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:16 compute-0 sudo[113906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcpofullaufoamrvsfsccwltmmxizixa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324456.2016962-106-86535063258454/AnsiballZ_dnf.py'
Oct 01 13:14:16 compute-0 sudo[113906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:16 compute-0 python3.9[113908]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:17 compute-0 ceph-mon[74802]: pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:17 compute-0 sudo[113906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:18 compute-0 sudo[114059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohlrfqxlqyeoisfzgzryiqapkzmliiyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324458.0082386-114-125591019098632/AnsiballZ_command.py'
Oct 01 13:14:18 compute-0 sudo[114059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:18 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct 01 13:14:18 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct 01 13:14:18 compute-0 python3.9[114061]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:14:19 compute-0 ceph-mon[74802]: pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:19 compute-0 ceph-mon[74802]: 11.6 scrub starts
Oct 01 13:14:19 compute-0 ceph-mon[74802]: 11.6 scrub ok
Oct 01 13:14:19 compute-0 sudo[114059]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:19 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct 01 13:14:19 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct 01 13:14:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:20 compute-0 sudo[114346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhkytehgsexpleopcuqqptcgqxgrwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324459.652523-122-96457880173518/AnsiballZ_file.py'
Oct 01 13:14:20 compute-0 sudo[114346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:20 compute-0 ceph-mon[74802]: 10.1c deep-scrub starts
Oct 01 13:14:20 compute-0 ceph-mon[74802]: 10.1c deep-scrub ok
Oct 01 13:14:20 compute-0 python3.9[114348]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 13:14:20 compute-0 sudo[114346]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 01 13:14:20 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 01 13:14:20 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 01 13:14:20 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 01 13:14:21 compute-0 ceph-mon[74802]: pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:21 compute-0 ceph-mon[74802]: 8.18 scrub starts
Oct 01 13:14:21 compute-0 ceph-mon[74802]: 8.18 scrub ok
Oct 01 13:14:21 compute-0 ceph-mon[74802]: 10.1d scrub starts
Oct 01 13:14:21 compute-0 ceph-mon[74802]: 10.1d scrub ok
Oct 01 13:14:21 compute-0 python3.9[114498]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:14:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 01 13:14:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 01 13:14:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:21 compute-0 sudo[114650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaignizqeouxzaqgjdsdlitojvdinyky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324461.5785177-138-199992742145382/AnsiballZ_dnf.py'
Oct 01 13:14:21 compute-0 sudo[114650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:22 compute-0 python3.9[114652]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:22 compute-0 ceph-mon[74802]: 8.14 scrub starts
Oct 01 13:14:22 compute-0 ceph-mon[74802]: 8.14 scrub ok
Oct 01 13:14:23 compute-0 ceph-mon[74802]: pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:23 compute-0 sudo[114650]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:24 compute-0 sudo[114803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybqcxcdcfmrapfesjwolaeztcqoujlzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324463.658976-147-140196038924320/AnsiballZ_dnf.py'
Oct 01 13:14:24 compute-0 sudo[114803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:24 compute-0 python3.9[114805]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:24 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 01 13:14:24 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 01 13:14:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:25 compute-0 ceph-mon[74802]: pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:25 compute-0 ceph-mon[74802]: 10.1f scrub starts
Oct 01 13:14:25 compute-0 ceph-mon[74802]: 10.1f scrub ok
Oct 01 13:14:25 compute-0 sudo[114803]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:26 compute-0 sudo[114956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmarbfzhhypnwgjczbkcnmleczspfcpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324465.8892438-159-247842197589233/AnsiballZ_stat.py'
Oct 01 13:14:26 compute-0 sudo[114956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:26 compute-0 python3.9[114958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:14:26 compute-0 sudo[114956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:27 compute-0 sudo[115110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiaugeeiqfmmzqauthdzhdiflwzmccqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324466.610372-167-215257440165472/AnsiballZ_slurp.py'
Oct 01 13:14:27 compute-0 sudo[115110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:27 compute-0 python3.9[115112]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 01 13:14:27 compute-0 sudo[115110]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:27 compute-0 ceph-mon[74802]: pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:27 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 01 13:14:27 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 01 13:14:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:28 compute-0 sshd-session[112444]: Connection closed by 192.168.122.30 port 54818
Oct 01 13:14:28 compute-0 sshd-session[112441]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:14:28 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Oct 01 13:14:28 compute-0 systemd[1]: session-36.scope: Consumed 18.327s CPU time.
Oct 01 13:14:28 compute-0 systemd-logind[818]: Session 36 logged out. Waiting for processes to exit.
Oct 01 13:14:28 compute-0 systemd-logind[818]: Removed session 36.
Oct 01 13:14:28 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 01 13:14:28 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 01 13:14:28 compute-0 ceph-mon[74802]: 8.15 scrub starts
Oct 01 13:14:28 compute-0 ceph-mon[74802]: 8.15 scrub ok
Oct 01 13:14:29 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct 01 13:14:29 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct 01 13:14:29 compute-0 ceph-mon[74802]: pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:29 compute-0 ceph-mon[74802]: 8.1f scrub starts
Oct 01 13:14:29 compute-0 ceph-mon[74802]: 8.1f scrub ok
Oct 01 13:14:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:30 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct 01 13:14:30 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct 01 13:14:30 compute-0 ceph-mon[74802]: 11.15 scrub starts
Oct 01 13:14:30 compute-0 ceph-mon[74802]: 11.15 scrub ok
Oct 01 13:14:30 compute-0 ceph-mon[74802]: pgmap v296: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:31 compute-0 ceph-mon[74802]: 11.2 scrub starts
Oct 01 13:14:31 compute-0 ceph-mon[74802]: 11.2 scrub ok
Oct 01 13:14:32 compute-0 sudo[115137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:32 compute-0 sudo[115137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:32 compute-0 sudo[115137]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:32 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Oct 01 13:14:32 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Oct 01 13:14:32 compute-0 sudo[115162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:14:32 compute-0 sudo[115162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:32 compute-0 sudo[115162]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:32 compute-0 sudo[115187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:32 compute-0 sudo[115187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:32 compute-0 sudo[115187]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:32 compute-0 sudo[115212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:14:32 compute-0 sudo[115212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:32 compute-0 ceph-mon[74802]: pgmap v297: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:33 compute-0 sudo[115212]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8aa69581-f101-4483-aee0-e8fbea0d07da does not exist
Oct 01 13:14:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 988c2010-35fd-430e-ac0b-3af7c987b839 does not exist
Oct 01 13:14:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 597060b1-5161-4bdb-8fa2-5cd01aaa083d does not exist
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:14:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:14:33 compute-0 sudo[115269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:33 compute-0 sudo[115269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:33 compute-0 sudo[115269]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:33 compute-0 sudo[115296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:14:33 compute-0 sudo[115296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:33 compute-0 sudo[115296]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:33 compute-0 sshd-session[115293]: Accepted publickey for zuul from 192.168.122.30 port 38486 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:14:33 compute-0 systemd-logind[818]: New session 37 of user zuul.
Oct 01 13:14:33 compute-0 systemd[1]: Started Session 37 of User zuul.
Oct 01 13:14:33 compute-0 sudo[115321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:33 compute-0 sshd-session[115293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:14:33 compute-0 sudo[115321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:33 compute-0 sudo[115321]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:33 compute-0 sudo[115348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:14:33 compute-0 sudo[115348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:33 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 01 13:14:33 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.66640896 +0000 UTC m=+0.035956833 container create 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 13:14:33 compute-0 systemd[1]: Started libpod-conmon-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope.
Oct 01 13:14:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.745928885 +0000 UTC m=+0.115476808 container init 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.649922616 +0000 UTC m=+0.019470480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.757462101 +0000 UTC m=+0.127009964 container start 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.76246904 +0000 UTC m=+0.132016883 container attach 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:14:33 compute-0 admiring_agnesi[115481]: 167 167
Oct 01 13:14:33 compute-0 systemd[1]: libpod-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope: Deactivated successfully.
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.765166886 +0000 UTC m=+0.134714749 container died 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbf95c5d159267cef43c843a0a1856ee26daeb336e7b7de6abedd35694fb8bbb-merged.mount: Deactivated successfully.
Oct 01 13:14:33 compute-0 podman[115465]: 2025-10-01 13:14:33.816467885 +0000 UTC m=+0.186015728 container remove 0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_agnesi, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:14:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:33 compute-0 systemd[1]: libpod-conmon-0c3a77a8037a8efb55e2bc4f651a568711d3ae0260af6b1b3016274a395d1947.scope: Deactivated successfully.
Oct 01 13:14:33 compute-0 ceph-mon[74802]: 8.1d scrub starts
Oct 01 13:14:33 compute-0 ceph-mon[74802]: 8.1d scrub ok
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:14:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:14:33 compute-0 podman[115553]: 2025-10-01 13:14:33.959555838 +0000 UTC m=+0.039190835 container create 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:14:33 compute-0 systemd[1]: Started libpod-conmon-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope.
Oct 01 13:14:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:34 compute-0 podman[115553]: 2025-10-01 13:14:33.942771705 +0000 UTC m=+0.022406742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:34 compute-0 podman[115553]: 2025-10-01 13:14:34.043656119 +0000 UTC m=+0.123291146 container init 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:14:34 compute-0 podman[115553]: 2025-10-01 13:14:34.049782432 +0000 UTC m=+0.129417439 container start 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:14:34 compute-0 podman[115553]: 2025-10-01 13:14:34.053829801 +0000 UTC m=+0.133464858 container attach 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:14:34 compute-0 python3.9[115622]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:34 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 01 13:14:34 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 01 13:14:34 compute-0 ceph-mon[74802]: 11.19 scrub starts
Oct 01 13:14:34 compute-0 ceph-mon[74802]: 11.19 scrub ok
Oct 01 13:14:34 compute-0 ceph-mon[74802]: pgmap v298: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:34 compute-0 ceph-mon[74802]: 11.3 scrub starts
Oct 01 13:14:34 compute-0 ceph-mon[74802]: 11.3 scrub ok
Oct 01 13:14:35 compute-0 adoring_gauss[115617]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:14:35 compute-0 adoring_gauss[115617]: --> relative data size: 1.0
Oct 01 13:14:35 compute-0 adoring_gauss[115617]: --> All data devices are unavailable
Oct 01 13:14:35 compute-0 systemd[1]: libpod-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope: Deactivated successfully.
Oct 01 13:14:35 compute-0 podman[115553]: 2025-10-01 13:14:35.040904172 +0000 UTC m=+1.120539249 container died 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2449bfb5c526b783882345d567395236bd4372be72a07bfa9677173432f51d62-merged.mount: Deactivated successfully.
Oct 01 13:14:35 compute-0 podman[115553]: 2025-10-01 13:14:35.100931218 +0000 UTC m=+1.180566225 container remove 88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:14:35 compute-0 systemd[1]: libpod-conmon-88671fef40b740b68e9a5e807be8d9a33083aea25545eea9455280cd64cead36.scope: Deactivated successfully.
Oct 01 13:14:35 compute-0 sudo[115348]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:35 compute-0 python3.9[115794]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:14:35 compute-0 sudo[115815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:35 compute-0 sudo[115815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:35 compute-0 sudo[115815]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:35 compute-0 sudo[115843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:14:35 compute-0 sudo[115843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:35 compute-0 sudo[115843]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:35 compute-0 sudo[115877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:35 compute-0 sudo[115877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:35 compute-0 sudo[115877]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:35 compute-0 sudo[115911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:14:35 compute-0 sudo[115911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 01 13:14:35 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 01 13:14:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.665135892 +0000 UTC m=+0.039150634 container create 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:14:35 compute-0 systemd[1]: Started libpod-conmon-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope.
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.649684892 +0000 UTC m=+0.023699634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.765381155 +0000 UTC m=+0.139395897 container init 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.771654144 +0000 UTC m=+0.145668866 container start 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.774444882 +0000 UTC m=+0.148459604 container attach 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:14:35 compute-0 nervous_raman[116069]: 167 167
Oct 01 13:14:35 compute-0 systemd[1]: libpod-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope: Deactivated successfully.
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.776953653 +0000 UTC m=+0.150968395 container died 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef5d0798d43233df98a4c6a9c1b9d2b444a72fd21c540f90a264ba178719af9-merged.mount: Deactivated successfully.
Oct 01 13:14:35 compute-0 podman[116022]: 2025-10-01 13:14:35.810719914 +0000 UTC m=+0.184734646 container remove 59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 01 13:14:35 compute-0 systemd[1]: libpod-conmon-59c04ec23d97fa477554adf95ee7168fb3dd981115db0e5578d88b46439a7e32.scope: Deactivated successfully.
Oct 01 13:14:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:35 compute-0 podman[116113]: 2025-10-01 13:14:35.962211424 +0000 UTC m=+0.039163784 container create ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:14:35 compute-0 systemd[1]: Started libpod-conmon-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope.
Oct 01 13:14:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:35.947688584 +0000 UTC m=+0.024640964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:36.044801037 +0000 UTC m=+0.121753417 container init ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:36.053923897 +0000 UTC m=+0.130876257 container start ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:36.05873762 +0000 UTC m=+0.135689980 container attach ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:14:36 compute-0 python3.9[116208]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:14:36 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct 01 13:14:36 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct 01 13:14:36 compute-0 sshd-session[115346]: Connection closed by 192.168.122.30 port 38486
Oct 01 13:14:36 compute-0 sshd-session[115293]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:14:36 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Oct 01 13:14:36 compute-0 systemd[1]: session-37.scope: Consumed 2.146s CPU time.
Oct 01 13:14:36 compute-0 systemd-logind[818]: Session 37 logged out. Waiting for processes to exit.
Oct 01 13:14:36 compute-0 systemd-logind[818]: Removed session 37.
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]: {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     "0": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "devices": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "/dev/loop3"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             ],
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_name": "ceph_lv0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_size": "21470642176",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "name": "ceph_lv0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "tags": {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_name": "ceph",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.crush_device_class": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.encrypted": "0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_id": "0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.vdo": "0"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             },
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "vg_name": "ceph_vg0"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         }
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     ],
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     "1": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "devices": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "/dev/loop4"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             ],
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_name": "ceph_lv1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_size": "21470642176",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "name": "ceph_lv1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "tags": {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_name": "ceph",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.crush_device_class": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.encrypted": "0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_id": "1",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.vdo": "0"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             },
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "vg_name": "ceph_vg1"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         }
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     ],
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     "2": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "devices": [
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "/dev/loop5"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             ],
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_name": "ceph_lv2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_size": "21470642176",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "name": "ceph_lv2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "tags": {
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.cluster_name": "ceph",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.crush_device_class": "",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.encrypted": "0",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osd_id": "2",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:                 "ceph.vdo": "0"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             },
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "type": "block",
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:             "vg_name": "ceph_vg2"
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:         }
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]:     ]
Oct 01 13:14:36 compute-0 hungry_montalcini[116158]: }
Oct 01 13:14:36 compute-0 systemd[1]: libpod-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope: Deactivated successfully.
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:36.81524364 +0000 UTC m=+0.892196000 container died ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a4fbe1fdd42cd0ea981872f26929f7a1c519f8141649be9f8035a48120a99e-merged.mount: Deactivated successfully.
Oct 01 13:14:36 compute-0 podman[116113]: 2025-10-01 13:14:36.863641836 +0000 UTC m=+0.940594196 container remove ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:14:36 compute-0 systemd[1]: libpod-conmon-ecace3978c41adf61aa82d62bd85b6a9ea0646ef0df572dd5c47c4d2fc8553eb.scope: Deactivated successfully.
Oct 01 13:14:36 compute-0 ceph-mon[74802]: 11.17 scrub starts
Oct 01 13:14:36 compute-0 ceph-mon[74802]: 11.17 scrub ok
Oct 01 13:14:36 compute-0 ceph-mon[74802]: pgmap v299: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:36 compute-0 ceph-mon[74802]: 11.d scrub starts
Oct 01 13:14:36 compute-0 ceph-mon[74802]: 11.d scrub ok
Oct 01 13:14:36 compute-0 sudo[115911]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:36 compute-0 sudo[116252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:36 compute-0 sudo[116252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:36 compute-0 sudo[116252]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:37 compute-0 sudo[116277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:14:37 compute-0 sudo[116277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:37 compute-0 sudo[116277]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:37 compute-0 sudo[116302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:37 compute-0 sudo[116302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:37 compute-0 sudo[116302]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:37 compute-0 sudo[116327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:14:37 compute-0 sudo[116327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:37 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.464936439 +0000 UTC m=+0.044797744 container create 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:14:37 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct 01 13:14:37 compute-0 systemd[1]: Started libpod-conmon-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope.
Oct 01 13:14:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.537470161 +0000 UTC m=+0.117331456 container init 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.445827721 +0000 UTC m=+0.025689046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.544725261 +0000 UTC m=+0.124586556 container start 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.54752712 +0000 UTC m=+0.127388415 container attach 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:14:37 compute-0 loving_driscoll[116408]: 167 167
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.550468014 +0000 UTC m=+0.130329319 container died 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:14:37 compute-0 systemd[1]: libpod-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope: Deactivated successfully.
Oct 01 13:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0df969a32de8bca6f0b78f5d2c9440c21588008b987a4e8931d053b108445618-merged.mount: Deactivated successfully.
Oct 01 13:14:37 compute-0 podman[116392]: 2025-10-01 13:14:37.588722918 +0000 UTC m=+0.168584213 container remove 5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:14:37 compute-0 systemd[1]: libpod-conmon-5d9ce3c1c25b9c6a83dd4e7775bbeb391d3dcc9e6165bd121e49fef1991a3a12.scope: Deactivated successfully.
Oct 01 13:14:37 compute-0 podman[116431]: 2025-10-01 13:14:37.740755126 +0000 UTC m=+0.041473758 container create 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:14:37 compute-0 systemd[1]: Started libpod-conmon-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope.
Oct 01 13:14:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:14:37 compute-0 podman[116431]: 2025-10-01 13:14:37.725157201 +0000 UTC m=+0.025875853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:14:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:37 compute-0 podman[116431]: 2025-10-01 13:14:37.827022965 +0000 UTC m=+0.127741607 container init 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:14:37 compute-0 podman[116431]: 2025-10-01 13:14:37.839243123 +0000 UTC m=+0.139961755 container start 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:14:37 compute-0 podman[116431]: 2025-10-01 13:14:37.843438006 +0000 UTC m=+0.144156638 container attach 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:14:38 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct 01 13:14:38 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct 01 13:14:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct 01 13:14:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct 01 13:14:38 compute-0 suspicious_turing[116448]: {
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_id": 0,
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "type": "bluestore"
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     },
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_id": 2,
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "type": "bluestore"
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     },
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_id": 1,
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:         "type": "bluestore"
Oct 01 13:14:38 compute-0 suspicious_turing[116448]:     }
Oct 01 13:14:38 compute-0 suspicious_turing[116448]: }
Oct 01 13:14:38 compute-0 systemd[1]: libpod-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope: Deactivated successfully.
Oct 01 13:14:38 compute-0 podman[116431]: 2025-10-01 13:14:38.803308283 +0000 UTC m=+1.104026935 container died 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-34fca52c3636036c4822e4c71532764a78141f78d395dc5919a6661e0f85555f-merged.mount: Deactivated successfully.
Oct 01 13:14:38 compute-0 podman[116431]: 2025-10-01 13:14:38.862981267 +0000 UTC m=+1.163699899 container remove 9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:14:38 compute-0 systemd[1]: libpod-conmon-9b40569537c2b85882d890cfaec39382282cbaa7939aef1ef842051a0293b699.scope: Deactivated successfully.
Oct 01 13:14:38 compute-0 ceph-mon[74802]: 11.10 scrub starts
Oct 01 13:14:38 compute-0 ceph-mon[74802]: 11.10 scrub ok
Oct 01 13:14:38 compute-0 ceph-mon[74802]: pgmap v300: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:38 compute-0 sudo[116327]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:14:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:14:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:38 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0f1c482c-fbd5-4335-b52e-d599b19cf618 does not exist
Oct 01 13:14:38 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fcce11b1-452a-4827-97ce-f1d06d3c5db5 does not exist
Oct 01 13:14:38 compute-0 sudo[116494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:14:38 compute-0 sudo[116494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:38 compute-0 sudo[116494]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:39 compute-0 sudo[116519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:14:39 compute-0 sudo[116519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:14:39 compute-0 sudo[116519]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:39 compute-0 ceph-mon[74802]: 8.1a scrub starts
Oct 01 13:14:39 compute-0 ceph-mon[74802]: 8.1a scrub ok
Oct 01 13:14:39 compute-0 ceph-mon[74802]: 9.1a scrub starts
Oct 01 13:14:39 compute-0 ceph-mon[74802]: 9.1a scrub ok
Oct 01 13:14:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:14:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:40 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct 01 13:14:40 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct 01 13:14:40 compute-0 ceph-mon[74802]: pgmap v301: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:41 compute-0 ceph-mon[74802]: 8.1e scrub starts
Oct 01 13:14:41 compute-0 ceph-mon[74802]: 8.1e scrub ok
Oct 01 13:14:42 compute-0 sshd-session[116544]: Accepted publickey for zuul from 192.168.122.30 port 52940 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:14:42 compute-0 systemd-logind[818]: New session 38 of user zuul.
Oct 01 13:14:42 compute-0 systemd[1]: Started Session 38 of User zuul.
Oct 01 13:14:42 compute-0 sshd-session[116544]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:14:42 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 01 13:14:42 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 01 13:14:42 compute-0 ceph-mon[74802]: pgmap v302: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:42 compute-0 ceph-mon[74802]: 8.2 scrub starts
Oct 01 13:14:42 compute-0 ceph-mon[74802]: 8.2 scrub ok
Oct 01 13:14:42 compute-0 sshd-session[116548]: Invalid user seekcy from 80.253.31.232 port 45004
Oct 01 13:14:43 compute-0 sshd-session[116548]: Received disconnect from 80.253.31.232 port 45004:11: Bye Bye [preauth]
Oct 01 13:14:43 compute-0 sshd-session[116548]: Disconnected from invalid user seekcy 80.253.31.232 port 45004 [preauth]
Oct 01 13:14:43 compute-0 python3.9[116699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:43 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct 01 13:14:43 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct 01 13:14:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:43 compute-0 ceph-mon[74802]: 10.17 scrub starts
Oct 01 13:14:43 compute-0 ceph-mon[74802]: 10.17 scrub ok
Oct 01 13:14:44 compute-0 python3.9[116853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:44 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct 01 13:14:44 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct 01 13:14:44 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 01 13:14:44 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 01 13:14:44 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct 01 13:14:44 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct 01 13:14:44 compute-0 sudo[117007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemzbxgixcbsoiprobvkqyhcitjwowjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324484.6541877-40-108364239773882/AnsiballZ_setup.py'
Oct 01 13:14:44 compute-0 sudo[117007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:44 compute-0 ceph-mon[74802]: pgmap v303: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 10.15 scrub starts
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 10.15 scrub ok
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 11.5 scrub starts
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 11.5 scrub ok
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 8.d deep-scrub starts
Oct 01 13:14:44 compute-0 ceph-mon[74802]: 8.d deep-scrub ok
Oct 01 13:14:45 compute-0 python3.9[117009]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:14:45 compute-0 sudo[117007]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:45 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 01 13:14:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:45 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 01 13:14:45 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 01 13:14:45 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 01 13:14:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:45 compute-0 sudo[117091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xclmizaaaticlbxmpdydiufbajmjvzkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324484.6541877-40-108364239773882/AnsiballZ_dnf.py'
Oct 01 13:14:45 compute-0 sudo[117091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:46 compute-0 ceph-mon[74802]: 9.9 scrub starts
Oct 01 13:14:46 compute-0 ceph-mon[74802]: 9.9 scrub ok
Oct 01 13:14:46 compute-0 ceph-mon[74802]: 8.4 deep-scrub starts
Oct 01 13:14:46 compute-0 ceph-mon[74802]: 8.4 deep-scrub ok
Oct 01 13:14:46 compute-0 python3.9[117093]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:47 compute-0 ceph-mon[74802]: pgmap v304: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:47 compute-0 sudo[117091]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:47 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 01 13:14:47 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:14:47
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root']
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:14:47 compute-0 sudo[117244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgifthftfzfybarazrybzhdsddktttps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324487.4143639-52-144611270727601/AnsiballZ_setup.py'
Oct 01 13:14:47 compute-0 sudo[117244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:14:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:14:48 compute-0 ceph-mon[74802]: 9.11 scrub starts
Oct 01 13:14:48 compute-0 ceph-mon[74802]: 9.11 scrub ok
Oct 01 13:14:48 compute-0 python3.9[117246]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:14:48 compute-0 sudo[117244]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:48 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct 01 13:14:48 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 01 13:14:48 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct 01 13:14:48 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 01 13:14:49 compute-0 ceph-mon[74802]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:49 compute-0 ceph-mon[74802]: 11.9 scrub starts
Oct 01 13:14:49 compute-0 ceph-mon[74802]: 9.d scrub starts
Oct 01 13:14:49 compute-0 ceph-mon[74802]: 11.9 scrub ok
Oct 01 13:14:49 compute-0 ceph-mon[74802]: 9.d scrub ok
Oct 01 13:14:49 compute-0 sudo[117439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htquappbpahzrkopjfrxvndwpobgygne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324488.6997025-63-185281162315936/AnsiballZ_file.py'
Oct 01 13:14:49 compute-0 sudo[117439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:49 compute-0 python3.9[117441]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:14:49 compute-0 sudo[117439]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:49 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.3 deep-scrub starts
Oct 01 13:14:49 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.3 deep-scrub ok
Oct 01 13:14:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:50 compute-0 ceph-mon[74802]: 9.3 deep-scrub starts
Oct 01 13:14:50 compute-0 ceph-mon[74802]: 9.3 deep-scrub ok
Oct 01 13:14:50 compute-0 sudo[117591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubgrixbkhzapfluuttfscbqvrywitheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324489.725343-71-275210811019411/AnsiballZ_command.py'
Oct 01 13:14:50 compute-0 sudo[117591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:50 compute-0 python3.9[117593]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:14:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:50 compute-0 sudo[117591]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 01 13:14:50 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 01 13:14:50 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 01 13:14:50 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 01 13:14:51 compute-0 ceph-mon[74802]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:51 compute-0 ceph-mon[74802]: 11.7 scrub starts
Oct 01 13:14:51 compute-0 ceph-mon[74802]: 11.7 scrub ok
Oct 01 13:14:51 compute-0 ceph-mon[74802]: 11.1b scrub starts
Oct 01 13:14:51 compute-0 ceph-mon[74802]: 11.1b scrub ok
Oct 01 13:14:51 compute-0 sudo[117756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofiawplvmubuwrhgehnccqtafywqsshy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324490.7421887-79-4103528112738/AnsiballZ_stat.py'
Oct 01 13:14:51 compute-0 sudo[117756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:51 compute-0 python3.9[117758]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:14:51 compute-0 sudo[117756]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:51 compute-0 sudo[117834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzhaxwgqauygmkykyyzxndwyqklrysr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324490.7421887-79-4103528112738/AnsiballZ_file.py'
Oct 01 13:14:51 compute-0 sudo[117834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:52 compute-0 python3.9[117836]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:14:52 compute-0 sudo[117834]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 01 13:14:52 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 01 13:14:52 compute-0 sudo[117986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwafipmnfnjtvvuieiyazmvzlbcqfjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324492.2345593-91-98887502393404/AnsiballZ_stat.py'
Oct 01 13:14:52 compute-0 sudo[117986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct 01 13:14:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct 01 13:14:52 compute-0 python3.9[117988]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:14:52 compute-0 sudo[117986]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:53 compute-0 ceph-mon[74802]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:53 compute-0 ceph-mon[74802]: 9.1d scrub starts
Oct 01 13:14:53 compute-0 ceph-mon[74802]: 9.1d scrub ok
Oct 01 13:14:53 compute-0 ceph-mon[74802]: 11.1c deep-scrub starts
Oct 01 13:14:53 compute-0 ceph-mon[74802]: 11.1c deep-scrub ok
Oct 01 13:14:53 compute-0 sudo[118064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atvqdzwwkxzxaylfpntilkmvqxtegztd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324492.2345593-91-98887502393404/AnsiballZ_file.py'
Oct 01 13:14:53 compute-0 sudo[118064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:53 compute-0 python3.9[118066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:14:53 compute-0 sudo[118064]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:53 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 01 13:14:53 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 01 13:14:53 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 01 13:14:53 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 01 13:14:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:53 compute-0 sshd-session[118067]: Invalid user superman from 200.7.101.139 port 55950
Oct 01 13:14:54 compute-0 sshd-session[118067]: Received disconnect from 200.7.101.139 port 55950:11: Bye Bye [preauth]
Oct 01 13:14:54 compute-0 sshd-session[118067]: Disconnected from invalid user superman 200.7.101.139 port 55950 [preauth]
Oct 01 13:14:54 compute-0 ceph-mon[74802]: 9.b scrub starts
Oct 01 13:14:54 compute-0 ceph-mon[74802]: 9.b scrub ok
Oct 01 13:14:54 compute-0 ceph-mon[74802]: 11.8 scrub starts
Oct 01 13:14:54 compute-0 ceph-mon[74802]: 11.8 scrub ok
Oct 01 13:14:54 compute-0 sudo[118218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnbnfxqrvnkwpygzswyolfhtkrrgulj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324493.621289-104-229151089165331/AnsiballZ_ini_file.py'
Oct 01 13:14:54 compute-0 sudo[118218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:54 compute-0 python3.9[118220]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:14:54 compute-0 sudo[118218]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:54 compute-0 sudo[118370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rypfdgeweaptqxfnmsavwphanajdiprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324494.5651083-104-15814988964876/AnsiballZ_ini_file.py'
Oct 01 13:14:54 compute-0 sudo[118370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:55 compute-0 python3.9[118372]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:14:55 compute-0 ceph-mon[74802]: pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:55 compute-0 sudo[118370]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:14:55 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Oct 01 13:14:55 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Oct 01 13:14:55 compute-0 sudo[118522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupexbikhbuvinvgpgtvgwdfjjxwaejt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324495.3691347-104-103850503842561/AnsiballZ_ini_file.py'
Oct 01 13:14:55 compute-0 sudo[118522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:55 compute-0 python3.9[118524]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:14:55 compute-0 sudo[118522]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:56 compute-0 ceph-mon[74802]: 9.5 deep-scrub starts
Oct 01 13:14:56 compute-0 ceph-mon[74802]: 9.5 deep-scrub ok
Oct 01 13:14:56 compute-0 sudo[118674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohzepqrzbuegjyzyrmzfzbaokjqrgqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324496.0942636-104-252797539037512/AnsiballZ_ini_file.py'
Oct 01 13:14:56 compute-0 sudo[118674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:56 compute-0 python3.9[118676]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:14:56 compute-0 sudo[118674]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:14:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:14:57 compute-0 sudo[118826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skwrjlqzsxjoybkpfzrcrjmuammuahoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324497.059673-135-133438596098122/AnsiballZ_dnf.py'
Oct 01 13:14:57 compute-0 sudo[118826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Oct 01 13:14:57 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Oct 01 13:14:57 compute-0 python3.9[118828]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:14:57 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 01 13:14:57 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 01 13:14:57 compute-0 ceph-mon[74802]: pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:58 compute-0 ceph-mon[74802]: 9.1b scrub starts
Oct 01 13:14:58 compute-0 ceph-mon[74802]: 9.1b scrub ok
Oct 01 13:14:58 compute-0 ceph-mon[74802]: 8.12 scrub starts
Oct 01 13:14:58 compute-0 ceph-mon[74802]: 8.12 scrub ok
Oct 01 13:14:58 compute-0 ceph-mon[74802]: pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:58 compute-0 sudo[118826]: pam_unix(sudo:session): session closed for user root
Oct 01 13:14:59 compute-0 sudo[118979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uecmakxtyherpsxaeiinhniokmhrbtdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324499.2927258-146-232674484977790/AnsiballZ_setup.py'
Oct 01 13:14:59 compute-0 sudo[118979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:14:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:14:59 compute-0 python3.9[118981]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:14:59 compute-0 sudo[118979]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:00 compute-0 sudo[119133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcuxuaafhvgkypmbrrnshurhgoooxosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324500.1456177-154-130771116134735/AnsiballZ_stat.py'
Oct 01 13:15:00 compute-0 sudo[119133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:00 compute-0 python3.9[119135]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:15:00 compute-0 sudo[119133]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:00 compute-0 ceph-mon[74802]: pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:01 compute-0 sudo[119285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqgxbrrpgigvpswzaxtxypxlaweibyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324500.8796413-163-256782565477970/AnsiballZ_stat.py'
Oct 01 13:15:01 compute-0 sudo[119285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:01 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 01 13:15:01 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 01 13:15:01 compute-0 python3.9[119287]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:15:01 compute-0 sudo[119285]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.906866) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501906963, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7304, "num_deletes": 251, "total_data_size": 9345268, "memory_usage": 9589216, "flush_reason": "Manual Compaction"}
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501962074, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7555255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7442, "table_properties": {"data_size": 7528116, "index_size": 17808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76671, "raw_average_key_size": 23, "raw_value_size": 7464414, "raw_average_value_size": 2265, "num_data_blocks": 780, "num_entries": 3295, "num_filter_entries": 3295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324080, "oldest_key_time": 1759324080, "file_creation_time": 1759324501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 55264 microseconds, and 21248 cpu microseconds.
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.962134) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7555255 bytes OK
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.962157) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963878) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963890) EVENT_LOG_v1 {"time_micros": 1759324501963886, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.963916) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9313389, prev total WAL file size 9313389, number of live WAL files 2.
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.965553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7378KB) 13(53KB) 8(1944B)]
Oct 01 13:15:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324501965617, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7612448, "oldest_snapshot_seqno": -1}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3111 keys, 7567859 bytes, temperature: kUnknown
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502028235, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7567859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7541091, "index_size": 17890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74743, "raw_average_key_size": 24, "raw_value_size": 7478958, "raw_average_value_size": 2404, "num_data_blocks": 784, "num_entries": 3111, "num_filter_entries": 3111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.028543) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7567859 bytes
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.030523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.2 rd, 120.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3401, records dropped: 290 output_compression: NoCompression
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:02.030539) EVENT_LOG_v1 {"time_micros": 1759324502030531, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62797, "compaction_time_cpu_micros": 15297, "output_level": 6, "num_output_files": 1, "total_output_size": 7567859, "num_input_records": 3401, "num_output_records": 3111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032011, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032187, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324502032352, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 01 13:15:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:15:01.965491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:15:02 compute-0 sudo[119438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwoxulajonpzkfhdhhgbjcrfzweztzum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324501.6730971-173-229455037682448/AnsiballZ_service_facts.py'
Oct 01 13:15:02 compute-0 sudo[119438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:02 compute-0 python3.9[119440]: ansible-service_facts Invoked
Oct 01 13:15:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct 01 13:15:02 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct 01 13:15:02 compute-0 network[119457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:15:02 compute-0 network[119458]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:15:02 compute-0 network[119459]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:15:02 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 01 13:15:02 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 01 13:15:02 compute-0 ceph-mon[74802]: 11.a scrub starts
Oct 01 13:15:02 compute-0 ceph-mon[74802]: 11.a scrub ok
Oct 01 13:15:02 compute-0 ceph-mon[74802]: pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:02 compute-0 ceph-mon[74802]: 8.1b scrub starts
Oct 01 13:15:02 compute-0 ceph-mon[74802]: 8.1b scrub ok
Oct 01 13:15:03 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Oct 01 13:15:03 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Oct 01 13:15:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:03 compute-0 ceph-mon[74802]: 11.c deep-scrub starts
Oct 01 13:15:03 compute-0 ceph-mon[74802]: 11.c deep-scrub ok
Oct 01 13:15:03 compute-0 ceph-mon[74802]: 11.1a deep-scrub starts
Oct 01 13:15:03 compute-0 ceph-mon[74802]: 11.1a deep-scrub ok
Oct 01 13:15:04 compute-0 ceph-mon[74802]: pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:05 compute-0 sudo[119438]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:05 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct 01 13:15:05 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct 01 13:15:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:05 compute-0 ceph-mon[74802]: 11.1f scrub starts
Oct 01 13:15:05 compute-0 ceph-mon[74802]: 11.1f scrub ok
Oct 01 13:15:06 compute-0 sudo[119745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbeqbsorpalhdryalqzhymckkjjzlotz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759324506.2163491-186-160786981574044/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759324506.2163491-186-160786981574044/args'
Oct 01 13:15:06 compute-0 sudo[119745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:06 compute-0 sudo[119745]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.12 deep-scrub starts
Oct 01 13:15:06 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.12 deep-scrub ok
Oct 01 13:15:07 compute-0 ceph-mon[74802]: pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:07 compute-0 ceph-mon[74802]: 11.12 deep-scrub starts
Oct 01 13:15:07 compute-0 ceph-mon[74802]: 11.12 deep-scrub ok
Oct 01 13:15:07 compute-0 sudo[119912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkxqxmgjdeeqcyednhcudbhqvkruxfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324507.1033506-197-167250356346236/AnsiballZ_dnf.py'
Oct 01 13:15:07 compute-0 sudo[119912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:07 compute-0 python3.9[119914]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:15:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 01 13:15:08 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 01 13:15:08 compute-0 sudo[119912]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:09 compute-0 ceph-mon[74802]: pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:09 compute-0 ceph-mon[74802]: 11.13 scrub starts
Oct 01 13:15:09 compute-0 ceph-mon[74802]: 11.13 scrub ok
Oct 01 13:15:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct 01 13:15:09 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct 01 13:15:09 compute-0 sshd-session[119940]: Invalid user seekcy from 156.236.31.46 port 44168
Oct 01 13:15:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:09 compute-0 sshd-session[119940]: Received disconnect from 156.236.31.46 port 44168:11: Bye Bye [preauth]
Oct 01 13:15:09 compute-0 sshd-session[119940]: Disconnected from invalid user seekcy 156.236.31.46 port 44168 [preauth]
Oct 01 13:15:10 compute-0 sudo[120067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smsaceotqcavbawfpdbzatntgixawqhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324509.2954335-210-119127048518743/AnsiballZ_package_facts.py'
Oct 01 13:15:10 compute-0 sudo[120067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:10 compute-0 ceph-mon[74802]: 9.1 scrub starts
Oct 01 13:15:10 compute-0 ceph-mon[74802]: 9.1 scrub ok
Oct 01 13:15:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 01 13:15:10 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 01 13:15:10 compute-0 python3.9[120069]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 01 13:15:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:10 compute-0 sudo[120067]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:11 compute-0 ceph-mon[74802]: pgmap v316: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:11 compute-0 ceph-mon[74802]: 11.16 scrub starts
Oct 01 13:15:11 compute-0 ceph-mon[74802]: 11.16 scrub ok
Oct 01 13:15:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 01 13:15:11 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 01 13:15:11 compute-0 sudo[120219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omysvytgpzsjgupakwrbrpcmcjqalegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324511.0454757-220-177098017889622/AnsiballZ_stat.py'
Oct 01 13:15:11 compute-0 sudo[120219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:11 compute-0 python3.9[120221]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:11 compute-0 sudo[120219]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:11 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 01 13:15:11 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 01 13:15:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:11 compute-0 sudo[120297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqyvvaeuorcqdndswakoeixmemlygzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324511.0454757-220-177098017889622/AnsiballZ_file.py'
Oct 01 13:15:11 compute-0 sudo[120297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:12 compute-0 ceph-mon[74802]: 11.1d scrub starts
Oct 01 13:15:12 compute-0 ceph-mon[74802]: 11.1d scrub ok
Oct 01 13:15:12 compute-0 ceph-mon[74802]: 8.1c scrub starts
Oct 01 13:15:12 compute-0 ceph-mon[74802]: 8.1c scrub ok
Oct 01 13:15:12 compute-0 python3.9[120299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:12 compute-0 sudo[120297]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:12 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 01 13:15:12 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 01 13:15:12 compute-0 sudo[120449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nesldxcghejagmipjqkilkrbvdclnzuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324512.3711429-232-19505466623110/AnsiballZ_stat.py'
Oct 01 13:15:12 compute-0 sudo[120449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:12 compute-0 python3.9[120451]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:12 compute-0 sudo[120449]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:13 compute-0 ceph-mon[74802]: pgmap v317: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:13 compute-0 ceph-mon[74802]: 10.b scrub starts
Oct 01 13:15:13 compute-0 ceph-mon[74802]: 10.b scrub ok
Oct 01 13:15:13 compute-0 sudo[120527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cviifonrosmvyawxzczokmefnxdzexuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324512.3711429-232-19505466623110/AnsiballZ_file.py'
Oct 01 13:15:13 compute-0 sudo[120527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:13 compute-0 python3.9[120529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:13 compute-0 sudo[120527]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:14 compute-0 sudo[120679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tldgacrnqznftnstmphqjhqobmngrhum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324513.9039152-250-10954077974196/AnsiballZ_lineinfile.py'
Oct 01 13:15:14 compute-0 sudo[120679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:14 compute-0 python3.9[120681]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:14 compute-0 sudo[120679]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:15 compute-0 ceph-mon[74802]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 01 13:15:15 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 01 13:15:15 compute-0 sudo[120831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjptihhahaovfkrvnbnzgehgtdbsyage ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324515.0317907-265-126772712145501/AnsiballZ_setup.py'
Oct 01 13:15:15 compute-0 sudo[120831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:15 compute-0 python3.9[120833]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:15:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:15 compute-0 sudo[120831]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:16 compute-0 ceph-mon[74802]: 10.19 scrub starts
Oct 01 13:15:16 compute-0 ceph-mon[74802]: 10.19 scrub ok
Oct 01 13:15:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 01 13:15:16 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 01 13:15:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 01 13:15:16 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 01 13:15:16 compute-0 sudo[120915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzvfyfxlsrubkyuiqlcpcxfzlkjevlwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324515.0317907-265-126772712145501/AnsiballZ_systemd.py'
Oct 01 13:15:16 compute-0 sudo[120915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:16 compute-0 python3.9[120917]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:15:16 compute-0 sudo[120915]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:17 compute-0 ceph-mon[74802]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:17 compute-0 ceph-mon[74802]: 10.11 scrub starts
Oct 01 13:15:17 compute-0 ceph-mon[74802]: 10.11 scrub ok
Oct 01 13:15:17 compute-0 ceph-mon[74802]: 9.16 scrub starts
Oct 01 13:15:17 compute-0 ceph-mon[74802]: 9.16 scrub ok
Oct 01 13:15:17 compute-0 sshd-session[116547]: Connection closed by 192.168.122.30 port 52940
Oct 01 13:15:17 compute-0 sshd-session[116544]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:15:17 compute-0 systemd-logind[818]: Session 38 logged out. Waiting for processes to exit.
Oct 01 13:15:17 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Oct 01 13:15:17 compute-0 systemd[1]: session-38.scope: Consumed 24.805s CPU time.
Oct 01 13:15:17 compute-0 systemd-logind[818]: Removed session 38.
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:19 compute-0 ceph-mon[74802]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:19 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Oct 01 13:15:19 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Oct 01 13:15:20 compute-0 ceph-mon[74802]: 11.11 deep-scrub starts
Oct 01 13:15:20 compute-0 ceph-mon[74802]: 11.11 deep-scrub ok
Oct 01 13:15:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:21 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 01 13:15:21 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 01 13:15:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Oct 01 13:15:21 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Oct 01 13:15:21 compute-0 ceph-mon[74802]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:21 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 01 13:15:21 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 01 13:15:22 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 01 13:15:22 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 01 13:15:22 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Oct 01 13:15:22 compute-0 ceph-osd[88455]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 10.10 scrub starts
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 10.10 scrub ok
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 9.1c scrub starts
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 9.1c scrub ok
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 8.11 scrub starts
Oct 01 13:15:22 compute-0 ceph-mon[74802]: 8.11 scrub ok
Oct 01 13:15:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct 01 13:15:22 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct 01 13:15:23 compute-0 sshd-session[120946]: Accepted publickey for zuul from 192.168.122.30 port 56024 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:15:23 compute-0 systemd-logind[818]: New session 39 of user zuul.
Oct 01 13:15:23 compute-0 systemd[1]: Started Session 39 of User zuul.
Oct 01 13:15:23 compute-0 sshd-session[120946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:15:23 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 01 13:15:23 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 01 13:15:23 compute-0 ceph-mon[74802]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:23 compute-0 ceph-mon[74802]: 10.6 scrub starts
Oct 01 13:15:23 compute-0 ceph-mon[74802]: 10.6 scrub ok
Oct 01 13:15:23 compute-0 ceph-mon[74802]: 9.1e deep-scrub starts
Oct 01 13:15:23 compute-0 ceph-mon[74802]: 9.1e deep-scrub ok
Oct 01 13:15:23 compute-0 sshd-session[120944]: Invalid user yassine from 27.254.137.144 port 45460
Oct 01 13:15:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:23 compute-0 sudo[121099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapkekltwimiukxzhwrdcadnokhrifqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324523.257201-22-217935227888047/AnsiballZ_file.py'
Oct 01 13:15:23 compute-0 sudo[121099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:24 compute-0 sshd-session[120944]: Received disconnect from 27.254.137.144 port 45460:11: Bye Bye [preauth]
Oct 01 13:15:24 compute-0 sshd-session[120944]: Disconnected from invalid user yassine 27.254.137.144 port 45460 [preauth]
Oct 01 13:15:24 compute-0 python3.9[121101]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:24 compute-0 sudo[121099]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:24 compute-0 ceph-mon[74802]: 11.b scrub starts
Oct 01 13:15:24 compute-0 ceph-mon[74802]: 11.b scrub ok
Oct 01 13:15:24 compute-0 ceph-mon[74802]: 10.13 scrub starts
Oct 01 13:15:24 compute-0 ceph-mon[74802]: 10.13 scrub ok
Oct 01 13:15:24 compute-0 ceph-mon[74802]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:24 compute-0 sudo[121251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjscxvdpvvnmioodfuxiogjtvaxdfbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324524.3005865-34-68528903288509/AnsiballZ_stat.py'
Oct 01 13:15:24 compute-0 sudo[121251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:25 compute-0 python3.9[121253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:25 compute-0 sudo[121251]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:25 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct 01 13:15:25 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct 01 13:15:25 compute-0 sudo[121329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnodwtwapfgkeijajzjbxnanyfgjiqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324524.3005865-34-68528903288509/AnsiballZ_file.py'
Oct 01 13:15:25 compute-0 sudo[121329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:25 compute-0 python3.9[121331]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:25 compute-0 sudo[121329]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:25 compute-0 sshd-session[120949]: Connection closed by 192.168.122.30 port 56024
Oct 01 13:15:25 compute-0 sshd-session[120946]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:15:25 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Oct 01 13:15:25 compute-0 systemd[1]: session-39.scope: Consumed 1.939s CPU time.
Oct 01 13:15:25 compute-0 systemd-logind[818]: Session 39 logged out. Waiting for processes to exit.
Oct 01 13:15:25 compute-0 systemd-logind[818]: Removed session 39.
Oct 01 13:15:25 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct 01 13:15:25 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct 01 13:15:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 01 13:15:26 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 01 13:15:26 compute-0 ceph-mon[74802]: 10.2 scrub starts
Oct 01 13:15:26 compute-0 ceph-mon[74802]: 10.2 scrub ok
Oct 01 13:15:26 compute-0 ceph-mon[74802]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:26 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct 01 13:15:27 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct 01 13:15:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 11.1e deep-scrub starts
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 11.1e deep-scrub ok
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 10.f scrub starts
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 10.f scrub ok
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 11.18 scrub starts
Oct 01 13:15:28 compute-0 ceph-mon[74802]: 11.18 scrub ok
Oct 01 13:15:29 compute-0 ceph-mon[74802]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:30 compute-0 sshd-session[121357]: Accepted publickey for zuul from 192.168.122.30 port 46642 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:15:30 compute-0 systemd-logind[818]: New session 40 of user zuul.
Oct 01 13:15:30 compute-0 systemd[1]: Started Session 40 of User zuul.
Oct 01 13:15:30 compute-0 sshd-session[121357]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:15:30 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 01 13:15:30 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 01 13:15:31 compute-0 ceph-mon[74802]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:31 compute-0 python3.9[121510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:15:32 compute-0 ceph-mon[74802]: 9.7 scrub starts
Oct 01 13:15:32 compute-0 ceph-mon[74802]: 9.7 scrub ok
Oct 01 13:15:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 01 13:15:32 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 01 13:15:33 compute-0 sudo[121664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wofusfyleneyrmegfkzkpafohjqmduma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324532.4721937-33-200792699918970/AnsiballZ_file.py'
Oct 01 13:15:33 compute-0 sudo[121664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:33 compute-0 ceph-mon[74802]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:33 compute-0 python3.9[121666]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:33 compute-0 sudo[121664]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:33 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 01 13:15:33 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 01 13:15:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:33 compute-0 sudo[121839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcexoshlookacntqifouinmvocbocif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324533.3988547-41-173536527498067/AnsiballZ_stat.py'
Oct 01 13:15:33 compute-0 sudo[121839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:34 compute-0 python3.9[121841]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:34 compute-0 sudo[121839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:34 compute-0 ceph-mon[74802]: 9.f scrub starts
Oct 01 13:15:34 compute-0 ceph-mon[74802]: 9.f scrub ok
Oct 01 13:15:34 compute-0 ceph-mon[74802]: 10.1a scrub starts
Oct 01 13:15:34 compute-0 ceph-mon[74802]: 10.1a scrub ok
Oct 01 13:15:34 compute-0 sudo[121917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxxeoolsnveuudpnuzhkkifroiaonwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324533.3988547-41-173536527498067/AnsiballZ_file.py'
Oct 01 13:15:34 compute-0 sudo[121917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:34 compute-0 python3.9[121919]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.vaqch02l recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:34 compute-0 sudo[121917]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:35 compute-0 ceph-mon[74802]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:35 compute-0 sudo[122069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbeygckjmdhykopwntexvirlwqpaoaqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324534.90821-61-244396754385572/AnsiballZ_stat.py'
Oct 01 13:15:35 compute-0 sudo[122069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:35 compute-0 python3.9[122071]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:35 compute-0 sudo[122069]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:35 compute-0 sudo[122147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbbtghcpxkazpfjoasjkamgigdnkcywg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324534.90821-61-244396754385572/AnsiballZ_file.py'
Oct 01 13:15:35 compute-0 sudo[122147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:35 compute-0 python3.9[122149]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._twqt4dg recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:35 compute-0 sudo[122147]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:36 compute-0 sudo[122299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-porxdrmbidzxghpdlmyhekcysuiyrzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324536.1292355-74-27171398955509/AnsiballZ_file.py'
Oct 01 13:15:36 compute-0 sudo[122299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:36 compute-0 python3.9[122301]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:15:36 compute-0 sudo[122299]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:37 compute-0 sudo[122451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxrxbmpodtqsppexzfldyjqtlkklsov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324536.8735793-82-137487439936835/AnsiballZ_stat.py'
Oct 01 13:15:37 compute-0 ceph-mon[74802]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:37 compute-0 sudo[122451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:37 compute-0 python3.9[122453]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:37 compute-0 sudo[122451]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:37 compute-0 sudo[122529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqyxjzrnevwlufegmobpfuohyjoburx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324536.8735793-82-137487439936835/AnsiballZ_file.py'
Oct 01 13:15:37 compute-0 sudo[122529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:37 compute-0 python3.9[122531]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:15:37 compute-0 sudo[122529]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 01 13:15:38 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 01 13:15:38 compute-0 sudo[122681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcifytfjigcyhabrozkseaoygdfhvwxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324538.0534453-82-42919183130827/AnsiballZ_stat.py'
Oct 01 13:15:38 compute-0 sudo[122681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:38 compute-0 python3.9[122683]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:38 compute-0 sudo[122681]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:38 compute-0 sudo[122759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flwpffyzyrfvcfwheuuqurxghvrmvxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324538.0534453-82-42919183130827/AnsiballZ_file.py'
Oct 01 13:15:38 compute-0 sudo[122759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:39 compute-0 sudo[122762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:39 compute-0 sudo[122762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[122762]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 sudo[122787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:15:39 compute-0 sudo[122787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[122787]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 python3.9[122761]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:15:39 compute-0 sudo[122759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 ceph-mon[74802]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:39 compute-0 ceph-mon[74802]: 10.12 scrub starts
Oct 01 13:15:39 compute-0 ceph-mon[74802]: 10.12 scrub ok
Oct 01 13:15:39 compute-0 sudo[122812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:39 compute-0 sudo[122812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[122812]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 sudo[122846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:15:39 compute-0 sudo[122846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[123025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpvazbebtlptefbysaxfqbkqrisnnxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324539.3451755-105-268544656737189/AnsiballZ_file.py'
Oct 01 13:15:39 compute-0 sudo[123025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:39 compute-0 sudo[122846]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:39 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ebec4e34-b19e-4173-b704-3ed08fb45147 does not exist
Oct 01 13:15:39 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 399d5a93-519c-4075-88f7-a0b430c9659d does not exist
Oct 01 13:15:39 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 87ede350-750d-415b-9980-f9156065a44d does not exist
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:15:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:15:39 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:15:39 compute-0 python3.9[123029]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:39 compute-0 sudo[123025]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 sudo[123045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:39 compute-0 sudo[123045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[123045]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 sudo[123071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:15:39 compute-0 sudo[123071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[123071]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:39 compute-0 sudo[123119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:39 compute-0 sudo[123119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:39 compute-0 sudo[123119]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:40 compute-0 sudo[123167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:15:40 compute-0 sudo[123167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:15:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:15:40 compute-0 sudo[123327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpvywttdsbvdzzjemqqazywbwxzvzmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324539.9644895-113-184449578338939/AnsiballZ_stat.py'
Oct 01 13:15:40 compute-0 sudo[123327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.354460712 +0000 UTC m=+0.055115617 container create e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:15:40 compute-0 systemd[1]: Started libpod-conmon-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope.
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.323833749 +0000 UTC m=+0.024488704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:40 compute-0 python3.9[123334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.468881801 +0000 UTC m=+0.169536756 container init e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.477676929 +0000 UTC m=+0.178331834 container start e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.483085736 +0000 UTC m=+0.183740641 container attach e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:40 compute-0 zealous_shamir[123353]: 167 167
Oct 01 13:15:40 compute-0 systemd[1]: libpod-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope: Deactivated successfully.
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.48994126 +0000 UTC m=+0.190596165 container died e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:15:40 compute-0 sudo[123327]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceee5b449e57be8cc8352b13bf0382d2ac8fa6e2436b35f6ecf21edbd82fedf7-merged.mount: Deactivated successfully.
Oct 01 13:15:40 compute-0 podman[123336]: 2025-10-01 13:15:40.570783089 +0000 UTC m=+0.271437964 container remove e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:40 compute-0 systemd[1]: libpod-conmon-e9aca52abb30712ffc7f99eae20a3db77c3694d5dfa2b815e1bd4888871dcc4c.scope: Deactivated successfully.
Oct 01 13:15:40 compute-0 podman[123426]: 2025-10-01 13:15:40.764821066 +0000 UTC m=+0.054342822 container create e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:15:40 compute-0 sudo[123464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqqcmghcmojveewdtniepmhrqjqzare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324539.9644895-113-184449578338939/AnsiballZ_file.py'
Oct 01 13:15:40 compute-0 sudo[123464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:40 compute-0 systemd[1]: Started libpod-conmon-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope.
Oct 01 13:15:40 compute-0 podman[123426]: 2025-10-01 13:15:40.739034361 +0000 UTC m=+0.028556197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:40 compute-0 podman[123426]: 2025-10-01 13:15:40.85650165 +0000 UTC m=+0.146023446 container init e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:15:40 compute-0 podman[123426]: 2025-10-01 13:15:40.870179098 +0000 UTC m=+0.159700894 container start e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:15:40 compute-0 podman[123426]: 2025-10-01 13:15:40.874230711 +0000 UTC m=+0.163752497 container attach e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:15:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 01 13:15:40 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 01 13:15:41 compute-0 python3.9[123466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:41 compute-0 sudo[123464]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:41 compute-0 ceph-mon[74802]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:41 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct 01 13:15:41 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct 01 13:15:41 compute-0 sudo[123629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zllepxysmiwmfzhepynmncbxuwwepmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324541.2671807-125-214100269294030/AnsiballZ_stat.py'
Oct 01 13:15:41 compute-0 sudo[123629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:41 compute-0 python3.9[123633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:41 compute-0 sudo[123629]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:41 compute-0 lucid_williams[123469]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:15:41 compute-0 lucid_williams[123469]: --> relative data size: 1.0
Oct 01 13:15:41 compute-0 lucid_williams[123469]: --> All data devices are unavailable
Oct 01 13:15:42 compute-0 systemd[1]: libpod-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Deactivated successfully.
Oct 01 13:15:42 compute-0 systemd[1]: libpod-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Consumed 1.064s CPU time.
Oct 01 13:15:42 compute-0 podman[123426]: 2025-10-01 13:15:42.005080771 +0000 UTC m=+1.294602517 container died e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3fb7dc4d950300637365688c4bc37543de7afbd7e4693aa9143b7930986e79-merged.mount: Deactivated successfully.
Oct 01 13:15:42 compute-0 podman[123426]: 2025-10-01 13:15:42.067325831 +0000 UTC m=+1.356847587 container remove e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:15:42 compute-0 systemd[1]: libpod-conmon-e8489f0d4575e644dfd7b6c1828233dd27c2139b9f55d9b8767d06fee5d7de8c.scope: Deactivated successfully.
Oct 01 13:15:42 compute-0 sudo[123167]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:42 compute-0 sudo[123740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebsnboqiicooirsxtjjepveoffqaqrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324541.2671807-125-214100269294030/AnsiballZ_file.py'
Oct 01 13:15:42 compute-0 sudo[123740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:42 compute-0 sudo[123739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:42 compute-0 sudo[123739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:42 compute-0 sudo[123739]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:42 compute-0 ceph-mon[74802]: 9.17 scrub starts
Oct 01 13:15:42 compute-0 ceph-mon[74802]: 9.17 scrub ok
Oct 01 13:15:42 compute-0 ceph-mon[74802]: 10.14 scrub starts
Oct 01 13:15:42 compute-0 ceph-mon[74802]: 10.14 scrub ok
Oct 01 13:15:42 compute-0 sudo[123767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:15:42 compute-0 sudo[123767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:42 compute-0 sudo[123767]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:42 compute-0 sudo[123792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:42 compute-0 sudo[123792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:42 compute-0 sudo[123792]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:42 compute-0 python3.9[123759]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:42 compute-0 sudo[123740]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:42 compute-0 sudo[123817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:15:42 compute-0 sudo[123817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.811680588 +0000 UTC m=+0.051200409 container create 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:15:42 compute-0 systemd[1]: Started libpod-conmon-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope.
Oct 01 13:15:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.785790059 +0000 UTC m=+0.025309930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.888868026 +0000 UTC m=+0.128387837 container init 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.896201347 +0000 UTC m=+0.135721168 container start 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.899697781 +0000 UTC m=+0.139217592 container attach 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:15:42 compute-0 systemd[1]: libpod-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope: Deactivated successfully.
Oct 01 13:15:42 compute-0 elegant_hodgkin[123975]: 167 167
Oct 01 13:15:42 compute-0 conmon[123975]: conmon 98f222d6fe08b0d953d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope/container/memory.events
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.905074308 +0000 UTC m=+0.144594129 container died 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1597fefdad16fc197eed0ee43d6eeb74e93b478a331a0388f402c3d8b4fb8ece-merged.mount: Deactivated successfully.
Oct 01 13:15:42 compute-0 podman[123959]: 2025-10-01 13:15:42.945935126 +0000 UTC m=+0.185454917 container remove 98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:15:42 compute-0 systemd[1]: libpod-conmon-98f222d6fe08b0d953d27d8dcf40da43f25fbb95eebc3744f30804459b6f308f.scope: Deactivated successfully.
Oct 01 13:15:43 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 01 13:15:43 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 01 13:15:43 compute-0 podman[124000]: 2025-10-01 13:15:43.103021243 +0000 UTC m=+0.041446949 container create 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:15:43 compute-0 systemd[1]: Started libpod-conmon-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope.
Oct 01 13:15:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:43 compute-0 podman[124000]: 2025-10-01 13:15:43.084753284 +0000 UTC m=+0.023179000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:43 compute-0 podman[124000]: 2025-10-01 13:15:43.21222189 +0000 UTC m=+0.150647646 container init 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:15:43 compute-0 podman[124000]: 2025-10-01 13:15:43.22838809 +0000 UTC m=+0.166813796 container start 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:15:43 compute-0 podman[124000]: 2025-10-01 13:15:43.233130385 +0000 UTC m=+0.171556131 container attach 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:15:43 compute-0 ceph-mon[74802]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:43 compute-0 sudo[124095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzqdhgssywufubsxvtmswyvrkozdmcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324542.625201-137-130145880487999/AnsiballZ_systemd.py'
Oct 01 13:15:43 compute-0 sudo[124095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:43 compute-0 python3.9[124097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:15:43 compute-0 systemd[1]: Reloading.
Oct 01 13:15:43 compute-0 systemd-rc-local-generator[124123]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:15:43 compute-0 systemd-sysv-generator[124126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:15:43 compute-0 sshd-session[123993]: Received disconnect from 80.253.31.232 port 52280:11: Bye Bye [preauth]
Oct 01 13:15:43 compute-0 sshd-session[123993]: Disconnected from authenticating user root 80.253.31.232 port 52280 [preauth]
Oct 01 13:15:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:43 compute-0 sudo[124095]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]: {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     "0": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "devices": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "/dev/loop3"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             ],
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_name": "ceph_lv0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_size": "21470642176",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "name": "ceph_lv0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "tags": {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_name": "ceph",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.crush_device_class": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.encrypted": "0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_id": "0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.vdo": "0"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             },
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "vg_name": "ceph_vg0"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         }
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     ],
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     "1": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "devices": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "/dev/loop4"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             ],
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_name": "ceph_lv1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_size": "21470642176",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "name": "ceph_lv1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "tags": {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_name": "ceph",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.crush_device_class": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.encrypted": "0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_id": "1",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.vdo": "0"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             },
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "vg_name": "ceph_vg1"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         }
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     ],
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     "2": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "devices": [
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "/dev/loop5"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             ],
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_name": "ceph_lv2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_size": "21470642176",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "name": "ceph_lv2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "tags": {
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.cluster_name": "ceph",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.crush_device_class": "",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.encrypted": "0",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osd_id": "2",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:                 "ceph.vdo": "0"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             },
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "type": "block",
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:             "vg_name": "ceph_vg2"
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:         }
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]:     ]
Oct 01 13:15:44 compute-0 sleepy_babbage[124058]: }
Oct 01 13:15:44 compute-0 systemd[1]: libpod-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope: Deactivated successfully.
Oct 01 13:15:44 compute-0 podman[124000]: 2025-10-01 13:15:44.04881903 +0000 UTC m=+0.987244726 container died 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-65fb14c67e51134263e03485d8b750317d8612151057a3c82303f8774a8e41b3-merged.mount: Deactivated successfully.
Oct 01 13:15:44 compute-0 podman[124000]: 2025-10-01 13:15:44.127397204 +0000 UTC m=+1.065822910 container remove 27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:44 compute-0 systemd[1]: libpod-conmon-27666e7caa6e5f70ae7bdcdd7ff586a157271918bd2686c90ef9aaf5624958dd.scope: Deactivated successfully.
Oct 01 13:15:44 compute-0 sudo[123817]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 sudo[124229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:44 compute-0 sudo[124229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:44 compute-0 sudo[124229]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 ceph-mon[74802]: 9.8 scrub starts
Oct 01 13:15:44 compute-0 ceph-mon[74802]: 9.8 scrub ok
Oct 01 13:15:44 compute-0 sudo[124277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:15:44 compute-0 sudo[124277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:44 compute-0 sudo[124277]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 sudo[124375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypuwkxxetfcvlhxocaybgawsieuormy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324544.1006453-145-37960553834487/AnsiballZ_stat.py'
Oct 01 13:15:44 compute-0 sudo[124375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:44 compute-0 sudo[124327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:44 compute-0 sudo[124327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:44 compute-0 sudo[124327]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 sudo[124380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:15:44 compute-0 sudo[124380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:44 compute-0 python3.9[124378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:44 compute-0 sudo[124375]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.822643103 +0000 UTC m=+0.054833588 container create 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:15:44 compute-0 systemd[1]: Started libpod-conmon-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope.
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.795642228 +0000 UTC m=+0.027832753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.913439827 +0000 UTC m=+0.145630352 container init 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.920394895 +0000 UTC m=+0.152585370 container start 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.925769061 +0000 UTC m=+0.157959536 container attach 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:15:44 compute-0 pensive_mccarthy[124512]: 167 167
Oct 01 13:15:44 compute-0 systemd[1]: libpod-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope: Deactivated successfully.
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.928115948 +0000 UTC m=+0.160306453 container died 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:15:44 compute-0 sudo[124544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbrjbebulfscrefqtxezcqqnvejqqkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324544.1006453-145-37960553834487/AnsiballZ_file.py'
Oct 01 13:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed95a2ebb8b88619d8a3dcd87c5d5d1424fa327efb97dd7701c02af255555d67-merged.mount: Deactivated successfully.
Oct 01 13:15:44 compute-0 sudo[124544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:44 compute-0 podman[124471]: 2025-10-01 13:15:44.976042158 +0000 UTC m=+0.208232633 container remove 56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:15:45 compute-0 systemd[1]: libpod-conmon-56e2bdeceb8ebbf547964c72a1eb9b30627e7bda6025d15157574df6185b4511.scope: Deactivated successfully.
Oct 01 13:15:45 compute-0 python3.9[124552]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:45 compute-0 sudo[124544]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:45 compute-0 podman[124564]: 2025-10-01 13:15:45.231148707 +0000 UTC m=+0.061680252 container create dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:15:45 compute-0 ceph-mon[74802]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 01 13:15:45 compute-0 systemd[1]: Started libpod-conmon-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope.
Oct 01 13:15:45 compute-0 podman[124564]: 2025-10-01 13:15:45.20804923 +0000 UTC m=+0.038580835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:15:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:45 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 01 13:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:15:45 compute-0 podman[124564]: 2025-10-01 13:15:45.322388855 +0000 UTC m=+0.152920470 container init dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:15:45 compute-0 podman[124564]: 2025-10-01 13:15:45.331809044 +0000 UTC m=+0.162340609 container start dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:15:45 compute-0 podman[124564]: 2025-10-01 13:15:45.334885025 +0000 UTC m=+0.165416620 container attach dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:15:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:45 compute-0 sudo[124734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekcynnpjrlgrtpzxhvdxwyudohsxkjhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324545.365115-157-98235356161952/AnsiballZ_stat.py'
Oct 01 13:15:45 compute-0 sudo[124734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:45 compute-0 python3.9[124736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:45 compute-0 sudo[124734]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:46 compute-0 sudo[124832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpayznrnfpnyijwwtacrnulzaapswjga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324545.365115-157-98235356161952/AnsiballZ_file.py'
Oct 01 13:15:46 compute-0 sudo[124832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Oct 01 13:15:46 compute-0 ceph-osd[89484]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]: {
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_id": 0,
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "type": "bluestore"
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     },
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_id": 2,
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "type": "bluestore"
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     },
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_id": 1,
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:         "type": "bluestore"
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]:     }
Oct 01 13:15:46 compute-0 quirky_blackburn[124604]: }
Oct 01 13:15:46 compute-0 systemd[1]: libpod-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope: Deactivated successfully.
Oct 01 13:15:46 compute-0 conmon[124604]: conmon dfeab916944186e050d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope/container/memory.events
Oct 01 13:15:46 compute-0 podman[124564]: 2025-10-01 13:15:46.321711326 +0000 UTC m=+1.152242921 container died dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:15:46 compute-0 ceph-mon[74802]: 9.15 scrub starts
Oct 01 13:15:46 compute-0 ceph-mon[74802]: 9.15 scrub ok
Oct 01 13:15:46 compute-0 python3.9[124837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:46 compute-0 sudo[124832]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28d1d65621e57130b4515ea3b528677f78f3cb464db3275c6bd57d7397b26e3-merged.mount: Deactivated successfully.
Oct 01 13:15:46 compute-0 podman[124564]: 2025-10-01 13:15:46.591247767 +0000 UTC m=+1.421779322 container remove dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:15:46 compute-0 systemd[1]: libpod-conmon-dfeab916944186e050d03485974ca268fc8cf535732eedd49937aba63b92da9e.scope: Deactivated successfully.
Oct 01 13:15:46 compute-0 sudo[124380]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:15:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:15:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:46 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fe0619f7-482f-4d5c-b82c-c6f749800991 does not exist
Oct 01 13:15:46 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 95d2b674-2df3-4c45-8028-118dba3b6622 does not exist
Oct 01 13:15:46 compute-0 sudo[124916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:15:46 compute-0 sudo[124916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:46 compute-0 sudo[124916]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:46 compute-0 sudo[124956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:15:46 compute-0 sudo[124956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:15:46 compute-0 sudo[124956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:47 compute-0 sudo[125054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnafzkhudxopqwttkijbqqwcjoptfmyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324546.6257238-169-175481711342842/AnsiballZ_systemd.py'
Oct 01 13:15:47 compute-0 sudo[125054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:47 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 01 13:15:47 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 01 13:15:47 compute-0 python3.9[125056]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:15:47 compute-0 systemd[1]: Reloading.
Oct 01 13:15:47 compute-0 systemd-sysv-generator[125085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:15:47 compute-0 systemd-rc-local-generator[125081]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:15:47 compute-0 ceph-mon[74802]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:47 compute-0 ceph-mon[74802]: 9.1f scrub starts
Oct 01 13:15:47 compute-0 ceph-mon[74802]: 9.1f scrub ok
Oct 01 13:15:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:15:47 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:15:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:15:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:15:47 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:15:47 compute-0 sudo[125054]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:15:47
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log']
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:15:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:47 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Oct 01 13:15:48 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Oct 01 13:15:48 compute-0 python3.9[125248]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:15:48 compute-0 network[125265]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:15:48 compute-0 network[125266]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:15:48 compute-0 network[125267]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:15:48 compute-0 ceph-mon[74802]: 9.6 scrub starts
Oct 01 13:15:48 compute-0 ceph-mon[74802]: 9.6 scrub ok
Oct 01 13:15:49 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 01 13:15:49 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 01 13:15:49 compute-0 ceph-mon[74802]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:49 compute-0 ceph-mon[74802]: 9.e deep-scrub starts
Oct 01 13:15:49 compute-0 ceph-mon[74802]: 9.e deep-scrub ok
Oct 01 13:15:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:50 compute-0 ceph-mon[74802]: 9.18 scrub starts
Oct 01 13:15:50 compute-0 ceph-mon[74802]: 9.18 scrub ok
Oct 01 13:15:50 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Oct 01 13:15:51 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Oct 01 13:15:51 compute-0 ceph-mon[74802]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:52 compute-0 ceph-mon[74802]: 9.c deep-scrub starts
Oct 01 13:15:52 compute-0 ceph-mon[74802]: 9.c deep-scrub ok
Oct 01 13:15:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 01 13:15:52 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 01 13:15:53 compute-0 sudo[125530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paphkijpkzwpmjfdibtkfozsqhoelidf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324553.219813-195-48739472409663/AnsiballZ_stat.py'
Oct 01 13:15:53 compute-0 sudo[125530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:53 compute-0 python3.9[125532]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:53 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct 01 13:15:53 compute-0 sudo[125530]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:54 compute-0 ceph-osd[90500]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct 01 13:15:54 compute-0 ceph-mon[74802]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:54 compute-0 sudo[125608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlagupgnmhobkqczdtkfrmjocqbeffut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324553.219813-195-48739472409663/AnsiballZ_file.py'
Oct 01 13:15:54 compute-0 sudo[125608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:54 compute-0 python3.9[125610]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:54 compute-0 sudo[125608]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:55 compute-0 ceph-mon[74802]: 9.13 scrub starts
Oct 01 13:15:55 compute-0 ceph-mon[74802]: 9.13 scrub ok
Oct 01 13:15:55 compute-0 ceph-mon[74802]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:55 compute-0 ceph-mon[74802]: 9.19 scrub starts
Oct 01 13:15:55 compute-0 ceph-mon[74802]: 9.19 scrub ok
Oct 01 13:15:55 compute-0 sudo[125760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwatfnfoxqshrfpmtvbjhsjutqjjyaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324554.8010385-208-222787861212904/AnsiballZ_file.py'
Oct 01 13:15:55 compute-0 sudo[125760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:55 compute-0 python3.9[125762]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:55 compute-0 sudo[125760]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:15:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:56 compute-0 sudo[125912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ublotannkmuddvijksbpavmqodglnyar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324555.5321977-216-25140119039887/AnsiballZ_stat.py'
Oct 01 13:15:56 compute-0 sudo[125912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:56 compute-0 python3.9[125914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:56 compute-0 sudo[125912]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:56 compute-0 sudo[125990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopupkpcdhcxlnlgbtmifosgvxzajzei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324555.5321977-216-25140119039887/AnsiballZ_file.py'
Oct 01 13:15:56 compute-0 sudo[125990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:56 compute-0 python3.9[125992]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:56 compute-0 sudo[125990]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:15:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:15:57 compute-0 ceph-mon[74802]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:57 compute-0 sudo[126142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-decyqyvgabgjikohnwiyyjfacwdcbdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324557.0964465-231-110007175231443/AnsiballZ_timezone.py'
Oct 01 13:15:57 compute-0 sudo[126142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:57 compute-0 python3.9[126144]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 01 13:15:57 compute-0 systemd[1]: Starting Time & Date Service...
Oct 01 13:15:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:57 compute-0 systemd[1]: Started Time & Date Service.
Oct 01 13:15:57 compute-0 sudo[126142]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:58 compute-0 sudo[126298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tanuywbomxbjckrulvtziyirrlctsqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324558.173024-240-234508223409845/AnsiballZ_file.py'
Oct 01 13:15:58 compute-0 sudo[126298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:58 compute-0 python3.9[126300]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:15:58 compute-0 sudo[126298]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:59 compute-0 sudo[126450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wduzamzfmozysxrhddyjbmiloaxijicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324558.8677356-248-159716860199307/AnsiballZ_stat.py'
Oct 01 13:15:59 compute-0 sudo[126450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:59 compute-0 ceph-mon[74802]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:59 compute-0 python3.9[126452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:15:59 compute-0 sudo[126450]: pam_unix(sudo:session): session closed for user root
Oct 01 13:15:59 compute-0 sudo[126528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgfyefhpymaywlknzdulomulupkftnip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324558.8677356-248-159716860199307/AnsiballZ_file.py'
Oct 01 13:15:59 compute-0 sudo[126528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:15:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:15:59 compute-0 python3.9[126530]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:00 compute-0 sudo[126528]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:00 compute-0 sudo[126680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozuhlnknnuwqatreyrqyhxffukmwgxtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324560.1643689-260-45279935154155/AnsiballZ_stat.py'
Oct 01 13:16:00 compute-0 sudo[126680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:00 compute-0 python3.9[126682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:00 compute-0 sudo[126680]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:01 compute-0 sudo[126758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jduhkyyojbckjckecsofcpfuhbipdjgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324560.1643689-260-45279935154155/AnsiballZ_file.py'
Oct 01 13:16:01 compute-0 sudo[126758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:01 compute-0 python3.9[126760]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5ddyw9zy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:01 compute-0 sudo[126758]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:01 compute-0 ceph-mon[74802]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:01 compute-0 sudo[126910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saamdociuvbqrzrfqtghjragidlyiwil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324561.4576964-272-238212314980750/AnsiballZ_stat.py'
Oct 01 13:16:01 compute-0 sudo[126910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:02 compute-0 python3.9[126912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:02 compute-0 sudo[126910]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:02 compute-0 sudo[126988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwayzceyshdticfesiwtyacminnndgar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324561.4576964-272-238212314980750/AnsiballZ_file.py'
Oct 01 13:16:02 compute-0 sudo[126988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:02 compute-0 python3.9[126990]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:02 compute-0 sudo[126988]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:03 compute-0 ceph-mon[74802]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:03 compute-0 sudo[127140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcgyksgifcfvbapmbztvljsftosntxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324562.8297608-285-229300026157769/AnsiballZ_command.py'
Oct 01 13:16:03 compute-0 sudo[127140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:03 compute-0 python3.9[127142]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:16:03 compute-0 sudo[127140]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:04 compute-0 sudo[127293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmswavefztjlzxfroakbhbkuwhjlnges ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324563.82187-293-15691091303333/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 13:16:04 compute-0 sudo[127293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:04 compute-0 python3[127295]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 13:16:04 compute-0 sudo[127293]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:04 compute-0 sudo[127445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltksggullczafehgeetildemyatdpuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324564.6471076-301-126003217423808/AnsiballZ_stat.py'
Oct 01 13:16:04 compute-0 sudo[127445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:05 compute-0 python3.9[127447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:05 compute-0 sudo[127445]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:05 compute-0 sudo[127523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyegbcutuipttcaewtfwelqqzylqwlbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324564.6471076-301-126003217423808/AnsiballZ_file.py'
Oct 01 13:16:05 compute-0 sudo[127523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:05 compute-0 ceph-mon[74802]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:05 compute-0 python3.9[127525]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:05 compute-0 sudo[127523]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:06 compute-0 sudo[127677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmwtsxnlzyptwdvsezvfzjbwlywwyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324565.7691221-313-69270385701494/AnsiballZ_stat.py'
Oct 01 13:16:06 compute-0 sudo[127677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:06 compute-0 sshd-session[127550]: Received disconnect from 200.7.101.139 port 39836:11: Bye Bye [preauth]
Oct 01 13:16:06 compute-0 sshd-session[127550]: Disconnected from authenticating user root 200.7.101.139 port 39836 [preauth]
Oct 01 13:16:06 compute-0 python3.9[127679]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:06 compute-0 sudo[127677]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:06 compute-0 sudo[127755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjoqjqrknipkzmmukppzwbwkjgolxznc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324565.7691221-313-69270385701494/AnsiballZ_file.py'
Oct 01 13:16:06 compute-0 sudo[127755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:06 compute-0 python3.9[127757]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:06 compute-0 sudo[127755]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:07 compute-0 sudo[127907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcvhdbtsdhnatqukowbwkcdjyalwmzvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324567.0824757-325-220507687490563/AnsiballZ_stat.py'
Oct 01 13:16:07 compute-0 sudo[127907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:07 compute-0 ceph-mon[74802]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:07 compute-0 python3.9[127909]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:07 compute-0 sudo[127907]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:07 compute-0 sudo[127985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnttafmhurrvuzzycjtcgqohyqxbctug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324567.0824757-325-220507687490563/AnsiballZ_file.py'
Oct 01 13:16:07 compute-0 sudo[127985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:08 compute-0 python3.9[127987]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:08 compute-0 sudo[127985]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:08 compute-0 sudo[128137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhquqhkqiychojxgdtaihejnvkpuprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324568.3223627-337-137321868712424/AnsiballZ_stat.py'
Oct 01 13:16:08 compute-0 sudo[128137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:08 compute-0 python3.9[128139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:08 compute-0 sudo[128137]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:09 compute-0 sudo[128215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zatcevlencqnyblauvbckzjqrrvlmmkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324568.3223627-337-137321868712424/AnsiballZ_file.py'
Oct 01 13:16:09 compute-0 sudo[128215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:09 compute-0 python3.9[128217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:09 compute-0 sudo[128215]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:09 compute-0 ceph-mon[74802]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:10 compute-0 sudo[128367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfxmfipswqbcbsvgpmxtuuazfrvhqzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324569.691526-349-224943091270635/AnsiballZ_stat.py'
Oct 01 13:16:10 compute-0 sudo[128367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:10 compute-0 python3.9[128369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:10 compute-0 sudo[128367]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:10 compute-0 sudo[128445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cttdtaxdyhxuavskziunbhiveaaimzup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324569.691526-349-224943091270635/AnsiballZ_file.py'
Oct 01 13:16:10 compute-0 sudo[128445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:10 compute-0 python3.9[128447]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:10 compute-0 sudo[128445]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:11 compute-0 sudo[128597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzhwpmpoxbvrlrepilkndsvsocdnhckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324571.1857848-362-198731574503350/AnsiballZ_command.py'
Oct 01 13:16:11 compute-0 sudo[128597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:11 compute-0 ceph-mon[74802]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:11 compute-0 python3.9[128599]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:16:11 compute-0 sudo[128597]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:12 compute-0 sudo[128752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpdzecljejjppnnbhwxcymckuezcyvyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324571.904817-370-58202867459557/AnsiballZ_blockinfile.py'
Oct 01 13:16:12 compute-0 sudo[128752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:12 compute-0 python3.9[128754]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:12 compute-0 sudo[128752]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:13 compute-0 sudo[128904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwebkxnxmreattkujojdcgjriamlaio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324572.8052077-379-140326597201908/AnsiballZ_file.py'
Oct 01 13:16:13 compute-0 sudo[128904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:13 compute-0 python3.9[128906]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:13 compute-0 sudo[128904]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:13 compute-0 ceph-mon[74802]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:13 compute-0 sudo[129056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pagqzmxeoghzqlokrlczcaqsrgfqjvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324573.468677-379-117108487716814/AnsiballZ_file.py'
Oct 01 13:16:13 compute-0 sudo[129056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:14 compute-0 python3.9[129058]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:14 compute-0 sudo[129056]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:14 compute-0 sshd-session[129059]: Invalid user user1 from 156.236.31.46 port 44252
Oct 01 13:16:14 compute-0 sshd-session[129059]: Received disconnect from 156.236.31.46 port 44252:11: Bye Bye [preauth]
Oct 01 13:16:14 compute-0 sshd-session[129059]: Disconnected from invalid user user1 156.236.31.46 port 44252 [preauth]
Oct 01 13:16:14 compute-0 sudo[129210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxmznuaxaamkfalcvpzesyzbrakemcbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324574.225127-394-169897167033174/AnsiballZ_mount.py'
Oct 01 13:16:14 compute-0 sudo[129210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:14 compute-0 python3.9[129212]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 13:16:14 compute-0 sudo[129210]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:15 compute-0 sudo[129362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfjcnnzdlkhjhlbojckcxvkwfdfqefmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324575.1228757-394-85569896004425/AnsiballZ_mount.py'
Oct 01 13:16:15 compute-0 sudo[129362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:15 compute-0 ceph-mon[74802]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:15 compute-0 python3.9[129364]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 01 13:16:15 compute-0 sudo[129362]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:16 compute-0 sshd-session[121360]: Connection closed by 192.168.122.30 port 46642
Oct 01 13:16:16 compute-0 sshd-session[121357]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:16:16 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Oct 01 13:16:16 compute-0 systemd[1]: session-40.scope: Consumed 32.686s CPU time.
Oct 01 13:16:16 compute-0 systemd-logind[818]: Session 40 logged out. Waiting for processes to exit.
Oct 01 13:16:16 compute-0 systemd-logind[818]: Removed session 40.
Oct 01 13:16:17 compute-0 ceph-mon[74802]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:19 compute-0 ceph-mon[74802]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:21 compute-0 sshd-session[129390]: Accepted publickey for zuul from 192.168.122.30 port 59774 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:16:21 compute-0 systemd-logind[818]: New session 41 of user zuul.
Oct 01 13:16:21 compute-0 systemd[1]: Started Session 41 of User zuul.
Oct 01 13:16:21 compute-0 sshd-session[129390]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:16:21 compute-0 ceph-mon[74802]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:22 compute-0 sudo[129543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gojvogpnicagndhpvgqrrlatbsbwxjxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324581.4884624-16-217887438420072/AnsiballZ_tempfile.py'
Oct 01 13:16:22 compute-0 sudo[129543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:22 compute-0 python3.9[129545]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 01 13:16:22 compute-0 sudo[129543]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:23 compute-0 sudo[129695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmwhujbmgzbgwbpyyurwyjjknguzpan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324582.4973755-28-45151950229971/AnsiballZ_stat.py'
Oct 01 13:16:23 compute-0 sudo[129695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:23 compute-0 python3.9[129697]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:16:23 compute-0 sudo[129695]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:23 compute-0 ceph-mon[74802]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:24 compute-0 sudo[129849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafkhsaynywstttvldexrfddslqhiyjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324583.5134013-36-156731705270858/AnsiballZ_slurp.py'
Oct 01 13:16:24 compute-0 sudo[129849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:24 compute-0 python3.9[129851]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 01 13:16:24 compute-0 sudo[129849]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:24 compute-0 sshd-session[129870]: banner exchange: Connection from 195.178.110.109 port 59004: invalid format
Oct 01 13:16:24 compute-0 sshd-session[129900]: banner exchange: Connection from 195.178.110.109 port 59008: invalid format
Oct 01 13:16:24 compute-0 sudo[130003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yppfufkirlafzdbmbdffvzybgkviwclr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324584.4916277-44-227339708361216/AnsiballZ_stat.py'
Oct 01 13:16:24 compute-0 sudo[130003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:25 compute-0 python3.9[130005]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.g7ci6xsn follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:16:25 compute-0 sudo[130003]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:25 compute-0 ceph-mon[74802]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:25 compute-0 sudo[130128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzsuuagbgwztbapaqdwihqhbjxwvwdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324584.4916277-44-227339708361216/AnsiballZ_copy.py'
Oct 01 13:16:25 compute-0 sudo[130128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:25 compute-0 python3.9[130130]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.g7ci6xsn mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324584.4916277-44-227339708361216/.source.g7ci6xsn _original_basename=.8_w8m315 follow=False checksum=4cbd468ec54f05af8d39c16a8e0b3b79c637512f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:25 compute-0 sudo[130128]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:26 compute-0 sudo[130280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyohgdvzptlpculauwtjyzyytfijecni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324586.1540942-59-40892933776311/AnsiballZ_setup.py'
Oct 01 13:16:26 compute-0 sudo[130280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:27 compute-0 python3.9[130282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:16:27 compute-0 sudo[130280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:27 compute-0 ceph-mon[74802]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:27 compute-0 sudo[130432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xroafqjbtpibqaozujzzytkibikzjyzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324587.3928246-68-18990180233351/AnsiballZ_blockinfile.py'
Oct 01 13:16:27 compute-0 sudo[130432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:27 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 01 13:16:28 compute-0 python3.9[130434]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQuc3bhfyzL595OFOLV247IpwwrNv1jbuEyuIMlhGVL9o/JSyWTFuOVfeOlp2bgaV1HmT029a0g6F2wKmJyCLyTmUlSHjvFu+5OYahUrcWRA5wdTNonHdPtV7OxmGUyid1pIpbNVNRW3jpvnxoiRnI9We0KEWETWj0KsbyuQEnHthqnNEbvu9ZDWHKO3WwnNiEt4TvlIrnPpVac+Q9mG4Iqcsl1qDYx9ZKPuVLtYXvEtxENwTCfYUN7Nt9v/5SUlGTGxFlLR/tBKFw98HNvii7zAkpst6QHrOpcFmWYO6LMkxVjz0aIZvNUsbfKtfnSgjUBuC6Oy/QuzhKisWbFqPENpGofP9VCenS2zfCHewrnjhYCM6/NX7PzTVH0vkxCO2C5+xXm6HIvDZPnYfSL50+z5xfZXpuB7I8mKze82lkWdpFMkvmglXmjoEQgmrbl5kPRhq0yteRkbyyR6B/0X02dml1bPXU3azBrbTQNImgJeKRX8yZGL3Bbsfl5VMT+r8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGgRSLYQNGHBrZk4XBkcn+kfWXhVXnPjRWsejgHIwyOG
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMQp4ff+5X+OCwYApPStN8XgACWS/2O/jZ6Xj4flPyrz/owAZoGD9kAYm/48KAYQYbXLvyoq8TZyZOgBYKe6Lcs=
                                              create=True mode=0644 path=/tmp/ansible.g7ci6xsn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:28 compute-0 sudo[130432]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:28 compute-0 sudo[130586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpaosibgeoaekzqmrovwqusiwvgvvjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324588.3061197-76-271239095788380/AnsiballZ_command.py'
Oct 01 13:16:28 compute-0 sudo[130586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:29 compute-0 python3.9[130588]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.g7ci6xsn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:16:29 compute-0 sudo[130586]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:29 compute-0 ceph-mon[74802]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:29 compute-0 sudo[130740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtonecitbbcmuuffeeuaniwvjytaxgfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324589.2802482-84-81251151315937/AnsiballZ_file.py'
Oct 01 13:16:29 compute-0 sudo[130740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:30 compute-0 python3.9[130742]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.g7ci6xsn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:30 compute-0 sudo[130740]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:30 compute-0 sshd-session[129393]: Connection closed by 192.168.122.30 port 59774
Oct 01 13:16:30 compute-0 sshd-session[129390]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:16:30 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Oct 01 13:16:30 compute-0 systemd[1]: session-41.scope: Consumed 6.245s CPU time.
Oct 01 13:16:30 compute-0 systemd-logind[818]: Session 41 logged out. Waiting for processes to exit.
Oct 01 13:16:30 compute-0 systemd-logind[818]: Removed session 41.
Oct 01 13:16:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:30 compute-0 ceph-mon[74802]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:33 compute-0 ceph-mon[74802]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:34 compute-0 sshd-session[130767]: Invalid user seekcy from 27.254.137.144 port 41060
Oct 01 13:16:34 compute-0 sshd-session[130767]: Received disconnect from 27.254.137.144 port 41060:11: Bye Bye [preauth]
Oct 01 13:16:34 compute-0 sshd-session[130767]: Disconnected from invalid user seekcy 27.254.137.144 port 41060 [preauth]
Oct 01 13:16:35 compute-0 ceph-mon[74802]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:35 compute-0 sshd-session[130770]: Accepted publickey for zuul from 192.168.122.30 port 58200 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:16:35 compute-0 systemd-logind[818]: New session 42 of user zuul.
Oct 01 13:16:35 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct 01 13:16:35 compute-0 sshd-session[130770]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:16:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:36 compute-0 python3.9[130923]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:16:37 compute-0 sudo[131077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vccfhkcqgruzdpjsanyialuquqsfrkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324597.017136-32-204928931112162/AnsiballZ_systemd.py'
Oct 01 13:16:37 compute-0 sudo[131077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:37 compute-0 ceph-mon[74802]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:38 compute-0 python3.9[131079]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 13:16:39 compute-0 sudo[131077]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:39 compute-0 ceph-mon[74802]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:39 compute-0 sudo[131231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aveeapuuusyylkmktucdmvowtlfvxuud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324599.248918-40-7098828020439/AnsiballZ_systemd.py'
Oct 01 13:16:39 compute-0 sudo[131231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:39 compute-0 python3.9[131233]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:16:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:39 compute-0 sudo[131231]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:40 compute-0 sudo[131384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmbxbiffxlmjqdtfktxwfczcmrqlsnea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324600.217635-49-152676367537834/AnsiballZ_command.py'
Oct 01 13:16:40 compute-0 sudo[131384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:40 compute-0 python3.9[131386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:16:40 compute-0 sudo[131384]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:41 compute-0 sudo[131537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upigmauoqnnnwftbiwdqufhtqlcwrory ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324601.0933914-57-18551335441129/AnsiballZ_stat.py'
Oct 01 13:16:41 compute-0 sudo[131537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:41 compute-0 python3.9[131539]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:16:41 compute-0 sudo[131537]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:41 compute-0 ceph-mon[74802]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:42 compute-0 sudo[131689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atdclrlqxhgvesaeachmpktkrqrdcbsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324602.0108986-66-162133611812692/AnsiballZ_file.py'
Oct 01 13:16:42 compute-0 sudo[131689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:42 compute-0 python3.9[131691]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:16:42 compute-0 sudo[131689]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:43 compute-0 sshd-session[130773]: Connection closed by 192.168.122.30 port 58200
Oct 01 13:16:43 compute-0 sshd-session[130770]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:16:43 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct 01 13:16:43 compute-0 systemd[1]: session-42.scope: Consumed 4.207s CPU time.
Oct 01 13:16:43 compute-0 systemd-logind[818]: Session 42 logged out. Waiting for processes to exit.
Oct 01 13:16:43 compute-0 systemd-logind[818]: Removed session 42.
Oct 01 13:16:43 compute-0 ceph-mon[74802]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:43 compute-0 sshd-session[131716]: Invalid user devtest from 80.253.31.232 port 44182
Oct 01 13:16:44 compute-0 sshd-session[131716]: Received disconnect from 80.253.31.232 port 44182:11: Bye Bye [preauth]
Oct 01 13:16:44 compute-0 sshd-session[131716]: Disconnected from invalid user devtest 80.253.31.232 port 44182 [preauth]
Oct 01 13:16:45 compute-0 ceph-mon[74802]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:46 compute-0 sudo[131718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:46 compute-0 sudo[131718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:46 compute-0 sudo[131718]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:46 compute-0 sudo[131743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:16:46 compute-0 sudo[131743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:46 compute-0 sudo[131743]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:47 compute-0 sudo[131768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:47 compute-0 sudo[131768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:47 compute-0 sudo[131768]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:47 compute-0 sudo[131793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:16:47 compute-0 sudo[131793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:47 compute-0 ceph-mon[74802]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:47 compute-0 sudo[131793]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:16:47
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:16:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:16:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:16:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:16:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:16:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:16:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4add7e12-f0a8-4e9b-a2bf-c6b545e1a6bb does not exist
Oct 01 13:16:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a167fa7d-9350-4ce4-9661-a3d19f97a952 does not exist
Oct 01 13:16:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 07c6cde4-e5a8-41ee-b5e9-a74cb50d6b3e does not exist
Oct 01 13:16:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:16:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:16:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:16:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:16:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:16:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:16:48 compute-0 sudo[131849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:48 compute-0 sudo[131849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:48 compute-0 sudo[131849]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:48 compute-0 sudo[131874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:16:48 compute-0 sudo[131874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:48 compute-0 sudo[131874]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:48 compute-0 sudo[131899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:48 compute-0 sudo[131899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:48 compute-0 sudo[131899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:48 compute-0 sudo[131924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:16:48 compute-0 sudo[131924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:48 compute-0 sshd-session[131968]: Accepted publickey for zuul from 192.168.122.30 port 53482 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:16:48 compute-0 systemd-logind[818]: New session 43 of user zuul.
Oct 01 13:16:48 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct 01 13:16:48 compute-0 sshd-session[131968]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:16:48 compute-0 podman[131990]: 2025-10-01 13:16:48.695116415 +0000 UTC m=+0.029153010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.038613127 +0000 UTC m=+0.372649682 container create 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:16:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:16:49 compute-0 systemd[1]: Started libpod-conmon-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope.
Oct 01 13:16:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.204910187 +0000 UTC m=+0.538946772 container init 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.215100084 +0000 UTC m=+0.549136629 container start 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:16:49 compute-0 admiring_benz[132059]: 167 167
Oct 01 13:16:49 compute-0 systemd[1]: libpod-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope: Deactivated successfully.
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.245396783 +0000 UTC m=+0.579433308 container attach 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.246632166 +0000 UTC m=+0.580668701 container died 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-aec32489dbe62dd4ee16e10aed14ca9c6ad851e6cabcac2b1bac44b0db6488a0-merged.mount: Deactivated successfully.
Oct 01 13:16:49 compute-0 podman[131990]: 2025-10-01 13:16:49.474049602 +0000 UTC m=+0.808086107 container remove 5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:16:49 compute-0 systemd[1]: libpod-conmon-5b2714e329f2b375c481ef674421669bdf97e893d5c480b35d97baef9da94479.scope: Deactivated successfully.
Oct 01 13:16:49 compute-0 podman[132182]: 2025-10-01 13:16:49.769984004 +0000 UTC m=+0.108048388 container create 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 01 13:16:49 compute-0 podman[132182]: 2025-10-01 13:16:49.709065684 +0000 UTC m=+0.047130088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:49 compute-0 systemd[1]: Started libpod-conmon-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope.
Oct 01 13:16:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:49 compute-0 python3.9[132176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:16:49 compute-0 podman[132182]: 2025-10-01 13:16:49.93387521 +0000 UTC m=+0.271939624 container init 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:16:49 compute-0 podman[132182]: 2025-10-01 13:16:49.946253053 +0000 UTC m=+0.284317457 container start 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:16:49 compute-0 podman[132182]: 2025-10-01 13:16:49.971582558 +0000 UTC m=+0.309646992 container attach 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:16:50 compute-0 ceph-mon[74802]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:50 compute-0 sudo[132369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtminwofmqnmtuzwadqvscpfnzzkhjza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324610.383064-34-26594933174836/AnsiballZ_setup.py'
Oct 01 13:16:50 compute-0 sudo[132369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:50 compute-0 pensive_beaver[132199]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:16:50 compute-0 pensive_beaver[132199]: --> relative data size: 1.0
Oct 01 13:16:50 compute-0 pensive_beaver[132199]: --> All data devices are unavailable
Oct 01 13:16:50 compute-0 systemd[1]: libpod-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope: Deactivated successfully.
Oct 01 13:16:50 compute-0 podman[132182]: 2025-10-01 13:16:50.941920464 +0000 UTC m=+1.279984848 container died 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2284e4528ab7ea536c09db61b743c54f40b3e799d6829408e155318e0686935a-merged.mount: Deactivated successfully.
Oct 01 13:16:51 compute-0 python3.9[132373]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:16:51 compute-0 ceph-mon[74802]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:51 compute-0 podman[132182]: 2025-10-01 13:16:51.153183816 +0000 UTC m=+1.491248200 container remove 696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:16:51 compute-0 systemd[1]: libpod-conmon-696fd7e57cdfdb118c6db9ab6583005dce6bf4db23e89ad0349f94fd4cf4be6d.scope: Deactivated successfully.
Oct 01 13:16:51 compute-0 sudo[131924]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:51 compute-0 sudo[132402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:51 compute-0 sudo[132402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:51 compute-0 sudo[132402]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:51 compute-0 sudo[132369]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:51 compute-0 sudo[132429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:16:51 compute-0 sudo[132429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:51 compute-0 sudo[132429]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:51 compute-0 sudo[132455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:51 compute-0 sudo[132455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:51 compute-0 sudo[132455]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:51 compute-0 sudo[132480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:16:51 compute-0 sudo[132480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:51 compute-0 sshd-session[71050]: Received disconnect from 38.102.83.150 port 43256:11: disconnected by user
Oct 01 13:16:51 compute-0 sshd-session[71050]: Disconnected from user zuul 38.102.83.150 port 43256
Oct 01 13:16:51 compute-0 sshd-session[71047]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:16:51 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 01 13:16:51 compute-0 systemd[1]: session-18.scope: Consumed 1min 26.263s CPU time.
Oct 01 13:16:51 compute-0 systemd-logind[818]: Session 18 logged out. Waiting for processes to exit.
Oct 01 13:16:51 compute-0 systemd-logind[818]: Removed session 18.
Oct 01 13:16:51 compute-0 sudo[132610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlhrrgukbcteidosbgohdyjilqiskgqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324610.383064-34-26594933174836/AnsiballZ_dnf.py'
Oct 01 13:16:51 compute-0 sudo[132610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.795336114 +0000 UTC m=+0.059805700 container create 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:16:51 compute-0 systemd[1]: Started libpod-conmon-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope.
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.760365473 +0000 UTC m=+0.024835079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.920106705 +0000 UTC m=+0.184576321 container init 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.92798968 +0000 UTC m=+0.192459266 container start 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:16:51 compute-0 objective_babbage[132636]: 167 167
Oct 01 13:16:51 compute-0 systemd[1]: libpod-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope: Deactivated successfully.
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.962940281 +0000 UTC m=+0.227409907 container attach 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:16:51 compute-0 podman[132620]: 2025-10-01 13:16:51.964543267 +0000 UTC m=+0.229012893 container died 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:16:51 compute-0 python3.9[132619]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 01 13:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b134e942197b56abf6cc07ec97a2d790aeaeeca3ed5ab3f7fa2754eaa8d5b9b7-merged.mount: Deactivated successfully.
Oct 01 13:16:52 compute-0 podman[132620]: 2025-10-01 13:16:52.20045127 +0000 UTC m=+0.464920886 container remove 3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:16:52 compute-0 systemd[1]: libpod-conmon-3bd42d0a5aebee315cfd3d719ea295ac476b417a39dfbbdf269a6d1388f2ed72.scope: Deactivated successfully.
Oct 01 13:16:52 compute-0 podman[132661]: 2025-10-01 13:16:52.437061318 +0000 UTC m=+0.089155606 container create 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:16:52 compute-0 podman[132661]: 2025-10-01 13:16:52.39188575 +0000 UTC m=+0.043980068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:52 compute-0 systemd[1]: Started libpod-conmon-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope.
Oct 01 13:16:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:52 compute-0 podman[132661]: 2025-10-01 13:16:52.601077969 +0000 UTC m=+0.253172257 container init 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:16:52 compute-0 podman[132661]: 2025-10-01 13:16:52.613773063 +0000 UTC m=+0.265867341 container start 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:16:52 compute-0 podman[132661]: 2025-10-01 13:16:52.653809912 +0000 UTC m=+0.305904220 container attach 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:16:53 compute-0 sudo[132610]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:53 compute-0 ceph-mon[74802]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:53 compute-0 infallible_yonath[132677]: {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     "0": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "devices": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "/dev/loop3"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             ],
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_name": "ceph_lv0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_size": "21470642176",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "name": "ceph_lv0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "tags": {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_name": "ceph",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.crush_device_class": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.encrypted": "0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_id": "0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.vdo": "0"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             },
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "vg_name": "ceph_vg0"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         }
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     ],
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     "1": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "devices": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "/dev/loop4"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             ],
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_name": "ceph_lv1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_size": "21470642176",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "name": "ceph_lv1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "tags": {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_name": "ceph",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.crush_device_class": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.encrypted": "0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_id": "1",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.vdo": "0"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             },
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "vg_name": "ceph_vg1"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         }
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     ],
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     "2": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "devices": [
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "/dev/loop5"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             ],
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_name": "ceph_lv2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_size": "21470642176",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "name": "ceph_lv2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "tags": {
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.cluster_name": "ceph",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.crush_device_class": "",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.encrypted": "0",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osd_id": "2",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:                 "ceph.vdo": "0"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             },
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "type": "block",
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:             "vg_name": "ceph_vg2"
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:         }
Oct 01 13:16:53 compute-0 infallible_yonath[132677]:     ]
Oct 01 13:16:53 compute-0 infallible_yonath[132677]: }
Oct 01 13:16:53 compute-0 systemd[1]: libpod-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope: Deactivated successfully.
Oct 01 13:16:53 compute-0 podman[132661]: 2025-10-01 13:16:53.354305689 +0000 UTC m=+1.006399977 container died 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a024fe2757e6628c310d10a4f01cbe7c00ebf9344489c1c1952a43a15fe912ee-merged.mount: Deactivated successfully.
Oct 01 13:16:53 compute-0 podman[132661]: 2025-10-01 13:16:53.623521776 +0000 UTC m=+1.275616054 container remove 2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:16:53 compute-0 systemd[1]: libpod-conmon-2a6a54808efa6eac36966617dcab589a9a8128aeeafcd4e501db2f3f2e751a1d.scope: Deactivated successfully.
Oct 01 13:16:53 compute-0 sudo[132480]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:53 compute-0 sudo[132800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:53 compute-0 sudo[132800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:53 compute-0 sudo[132800]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:53 compute-0 sudo[132833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:16:53 compute-0 sudo[132833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:53 compute-0 sudo[132833]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:53 compute-0 sudo[132880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:53 compute-0 sudo[132880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:53 compute-0 sudo[132880]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:53 compute-0 sudo[132926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:16:53 compute-0 sudo[132926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:54 compute-0 python3.9[132923]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.405407148 +0000 UTC m=+0.070782365 container create 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.355522895 +0000 UTC m=+0.020898132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:54 compute-0 systemd[1]: Started libpod-conmon-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope.
Oct 01 13:16:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.581684738 +0000 UTC m=+0.247060045 container init 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.589541742 +0000 UTC m=+0.254916929 container start 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:16:54 compute-0 modest_moser[133009]: 167 167
Oct 01 13:16:54 compute-0 systemd[1]: libpod-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope: Deactivated successfully.
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.627629362 +0000 UTC m=+0.293004649 container attach 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.630007246 +0000 UTC m=+0.295382463 container died 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:16:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-886b0a14d1afc40cb8fa2e7e05e5f859a761387bc823ae29d2aec03caff53c96-merged.mount: Deactivated successfully.
Oct 01 13:16:54 compute-0 podman[132993]: 2025-10-01 13:16:54.812872806 +0000 UTC m=+0.478248023 container remove 98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:16:54 compute-0 systemd[1]: libpod-conmon-98279e056d2157714c95baaf9241d917cab1e12b3c7178d8c2a513f932513209.scope: Deactivated successfully.
Oct 01 13:16:55 compute-0 podman[133108]: 2025-10-01 13:16:55.072391815 +0000 UTC m=+0.080567558 container create 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:16:55 compute-0 podman[133108]: 2025-10-01 13:16:55.023850748 +0000 UTC m=+0.032026551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:16:55 compute-0 systemd[1]: Started libpod-conmon-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope.
Oct 01 13:16:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:16:55 compute-0 podman[133108]: 2025-10-01 13:16:55.24656599 +0000 UTC m=+0.254741793 container init 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:16:55 compute-0 podman[133108]: 2025-10-01 13:16:55.259102458 +0000 UTC m=+0.267278201 container start 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:16:55 compute-0 podman[133108]: 2025-10-01 13:16:55.326320387 +0000 UTC m=+0.334496130 container attach 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:16:55 compute-0 ceph-mon[74802]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:55 compute-0 python3.9[133203]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:16:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:16:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]: {
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_id": 0,
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "type": "bluestore"
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     },
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_id": 2,
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "type": "bluestore"
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     },
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_id": 1,
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:         "type": "bluestore"
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]:     }
Oct 01 13:16:56 compute-0 intelligent_agnesi[133137]: }
Oct 01 13:16:56 compute-0 systemd[1]: libpod-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Deactivated successfully.
Oct 01 13:16:56 compute-0 podman[133108]: 2025-10-01 13:16:56.311701899 +0000 UTC m=+1.319877642 container died 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:16:56 compute-0 systemd[1]: libpod-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Consumed 1.056s CPU time.
Oct 01 13:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-006821c121138a927416f0096ff5694f43dbc8c7decb1544bab76a91e36ef9aa-merged.mount: Deactivated successfully.
Oct 01 13:16:56 compute-0 podman[133108]: 2025-10-01 13:16:56.395925232 +0000 UTC m=+1.404100955 container remove 3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_agnesi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:16:56 compute-0 systemd[1]: libpod-conmon-3a97e6dd16413e7402e4861607421b073cd385488f6146dfdbcd5fd6f3ea123d.scope: Deactivated successfully.
Oct 01 13:16:56 compute-0 sudo[132926]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:16:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:16:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 70fc0b91-1c27-479e-b17c-d914cdd029b6 does not exist
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev aaa8d604-9c05-42ac-ae50-d9e6ac5344a5 does not exist
Oct 01 13:16:56 compute-0 python3.9[133379]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:16:56 compute-0 sudo[133396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:16:56 compute-0 sudo[133396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:56 compute-0 sudo[133396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:56 compute-0 sudo[133442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:16:56 compute-0 sudo[133442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:16:56 compute-0 sudo[133442]: pam_unix(sudo:session): session closed for user root
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:16:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:16:57 compute-0 python3.9[133595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:16:57 compute-0 ceph-mon[74802]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:16:57 compute-0 sshd-session[131997]: Connection closed by 192.168.122.30 port 53482
Oct 01 13:16:57 compute-0 sshd-session[131968]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:16:57 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct 01 13:16:57 compute-0 systemd[1]: session-43.scope: Consumed 6.259s CPU time.
Oct 01 13:16:57 compute-0 systemd-logind[818]: Session 43 logged out. Waiting for processes to exit.
Oct 01 13:16:57 compute-0 systemd-logind[818]: Removed session 43.
Oct 01 13:16:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:59 compute-0 ceph-mon[74802]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:16:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:01 compute-0 ceph-mon[74802]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:02 compute-0 sshd-session[133621]: Accepted publickey for zuul from 192.168.122.30 port 60984 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:17:02 compute-0 systemd-logind[818]: New session 44 of user zuul.
Oct 01 13:17:02 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct 01 13:17:02 compute-0 sshd-session[133621]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:17:03 compute-0 ceph-mon[74802]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:03 compute-0 python3.9[133774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:17:05 compute-0 sudo[133928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfuorlqywxsbpvzqjlakrnbbihnkrluf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324625.0570807-50-203374958535003/AnsiballZ_file.py'
Oct 01 13:17:05 compute-0 sudo[133928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:05 compute-0 ceph-mon[74802]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:05 compute-0 python3.9[133930]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:05 compute-0 sudo[133928]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:06 compute-0 sudo[134080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoqyjochkuonwjbwwobemfvnsczfrkwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324625.831517-50-268997865783870/AnsiballZ_file.py'
Oct 01 13:17:06 compute-0 sudo[134080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:06 compute-0 sshd-session[133620]: Connection closed by 202.103.55.158 port 39644 [preauth]
Oct 01 13:17:06 compute-0 python3.9[134082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:06 compute-0 sudo[134080]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:06 compute-0 sudo[134233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqgqhntckjnycyzapaqyyzfppgxwrfvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324626.4382172-65-224804694277140/AnsiballZ_stat.py'
Oct 01 13:17:06 compute-0 sudo[134233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:07 compute-0 python3.9[134235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:07 compute-0 sudo[134233]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:07 compute-0 sudo[134356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjenshtezaghslwddxtwprrdstoinklt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324626.4382172-65-224804694277140/AnsiballZ_copy.py'
Oct 01 13:17:07 compute-0 sudo[134356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:07 compute-0 ceph-mon[74802]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:07 compute-0 python3.9[134358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324626.4382172-65-224804694277140/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b9fa2a794cb9bb11a680f9f94d271635d0bb57f3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:07 compute-0 sudo[134356]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:08 compute-0 sudo[134508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwofwxzjioryoblupqragbkswbrsdgyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324627.8701072-65-64294120392274/AnsiballZ_stat.py'
Oct 01 13:17:08 compute-0 sudo[134508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:08 compute-0 python3.9[134510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:08 compute-0 sudo[134508]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:08 compute-0 sudo[134631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvvdnjuyhxegywmsgkxsckmlqhvpingr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324627.8701072-65-64294120392274/AnsiballZ_copy.py'
Oct 01 13:17:08 compute-0 sudo[134631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:08 compute-0 python3.9[134633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324627.8701072-65-64294120392274/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ad8d658a88600c09a5e73bc2aedff1b9c3ca8413 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:08 compute-0 sudo[134631]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:09 compute-0 sudo[134783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edacvuzqeqijfpscxujnjwnuzlimfduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324629.0596507-65-29320201257641/AnsiballZ_stat.py'
Oct 01 13:17:09 compute-0 sudo[134783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:09 compute-0 python3.9[134785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:09 compute-0 sudo[134783]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:09 compute-0 ceph-mon[74802]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:09 compute-0 sudo[134906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzygonuebkpqiliueuuuyxutpwftoaye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324629.0596507-65-29320201257641/AnsiballZ_copy.py'
Oct 01 13:17:09 compute-0 sudo[134906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:10 compute-0 python3.9[134908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324629.0596507-65-29320201257641/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a9c9520c160593e5fde00102171f94da1aff2a8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:10 compute-0 sudo[134906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:10 compute-0 sudo[135058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azmsqfnltedpyfbeyqaecaprizptqloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324630.2446666-109-47440819422119/AnsiballZ_file.py'
Oct 01 13:17:10 compute-0 sudo[135058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:10 compute-0 python3.9[135060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:10 compute-0 sudo[135058]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:11 compute-0 sudo[135210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnrfiinnquvxtcnqnsbdlfxgtkczrldm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324630.9023087-109-69416798415018/AnsiballZ_file.py'
Oct 01 13:17:11 compute-0 sudo[135210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:11 compute-0 python3.9[135212]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:11 compute-0 sudo[135210]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:11 compute-0 ceph-mon[74802]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:11 compute-0 sudo[135362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riagcgbfihcoyarxjahsmsbbkoqbrgdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324631.622709-124-43326251190952/AnsiballZ_stat.py'
Oct 01 13:17:11 compute-0 sudo[135362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:12 compute-0 python3.9[135364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:12 compute-0 sudo[135362]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:12 compute-0 sudo[135485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puffcnxpkkqkcocdkvudmuwewztsqtoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324631.622709-124-43326251190952/AnsiballZ_copy.py'
Oct 01 13:17:12 compute-0 sudo[135485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:12 compute-0 python3.9[135487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324631.622709-124-43326251190952/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b8ff7b3142d7df68d546af11a3e168a78877cc9d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:12 compute-0 sudo[135485]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:13 compute-0 sudo[135637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pghpegotelslnoqamoorvatygjzlezzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324632.8767245-124-195726650198005/AnsiballZ_stat.py'
Oct 01 13:17:13 compute-0 sudo[135637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:13 compute-0 python3.9[135639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:13 compute-0 sudo[135637]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:13 compute-0 ceph-mon[74802]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:13 compute-0 sudo[135760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqvmirnoygizpjbcckwvkmfcvrxzxatt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324632.8767245-124-195726650198005/AnsiballZ_copy.py'
Oct 01 13:17:13 compute-0 sudo[135760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:14 compute-0 python3.9[135762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324632.8767245-124-195726650198005/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ef3deb6a220b9ac95487eeab2c91b47cd4f38015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:14 compute-0 sudo[135760]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:14 compute-0 sudo[135912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcylqciuespuyhzylykwlnjsolvqguzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324634.2179275-124-1610352530114/AnsiballZ_stat.py'
Oct 01 13:17:14 compute-0 sudo[135912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:14 compute-0 python3.9[135914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:14 compute-0 sudo[135912]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:15 compute-0 sudo[136035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leatjjimjlirlkqvustwqvnaiwomdzsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324634.2179275-124-1610352530114/AnsiballZ_copy.py'
Oct 01 13:17:15 compute-0 sudo[136035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:15 compute-0 python3.9[136037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324634.2179275-124-1610352530114/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2ea0a31ae214ecaf6d723593ef235e827d28ae61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:15 compute-0 sudo[136035]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:15 compute-0 ceph-mon[74802]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:16 compute-0 sudo[136187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pozbzojvsqukfujcnmvszxiickslibwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324635.730093-168-95600942100330/AnsiballZ_file.py'
Oct 01 13:17:16 compute-0 sudo[136187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:16 compute-0 python3.9[136191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:16 compute-0 sudo[136187]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:16 compute-0 sshd-session[136188]: Invalid user walter from 200.7.101.139 port 40350
Oct 01 13:17:16 compute-0 sshd-session[136188]: Received disconnect from 200.7.101.139 port 40350:11: Bye Bye [preauth]
Oct 01 13:17:16 compute-0 sshd-session[136188]: Disconnected from invalid user walter 200.7.101.139 port 40350 [preauth]
Oct 01 13:17:16 compute-0 sudo[136341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zifvnefsgmerrmzvfpmvreecoufezhao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324636.39769-168-17230899702886/AnsiballZ_file.py'
Oct 01 13:17:16 compute-0 sudo[136341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:16 compute-0 python3.9[136343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:16 compute-0 sudo[136341]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:17 compute-0 sudo[136495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcwpnyoqlyqgyrwmgiwgahwiwjaosvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324637.087863-183-52005592253955/AnsiballZ_stat.py'
Oct 01 13:17:17 compute-0 sudo[136495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:17 compute-0 python3.9[136497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:17 compute-0 sudo[136495]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:17 compute-0 ceph-mon[74802]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:17 compute-0 sshd-session[136443]: Invalid user mahima from 156.236.31.46 port 44338
Oct 01 13:17:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:17 compute-0 sshd-session[136443]: Received disconnect from 156.236.31.46 port 44338:11: Bye Bye [preauth]
Oct 01 13:17:17 compute-0 sshd-session[136443]: Disconnected from invalid user mahima 156.236.31.46 port 44338 [preauth]
Oct 01 13:17:18 compute-0 sudo[136618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmasiovytpdqvifirwmalludvjasjbmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324637.087863-183-52005592253955/AnsiballZ_copy.py'
Oct 01 13:17:18 compute-0 sudo[136618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:18 compute-0 python3.9[136620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324637.087863-183-52005592253955/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=80e17c4f9d1023d4423514bc3bb574c53d852795 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:18 compute-0 sudo[136618]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:18 compute-0 sudo[136770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbpowfmmhtihssnleqnlopnrzlfjmtkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324638.6106455-183-187100655429771/AnsiballZ_stat.py'
Oct 01 13:17:18 compute-0 sudo[136770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:19 compute-0 python3.9[136772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:19 compute-0 sudo[136770]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:19 compute-0 sudo[136893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzmtgyagiademxqmiesejvztlgtmmuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324638.6106455-183-187100655429771/AnsiballZ_copy.py'
Oct 01 13:17:19 compute-0 sudo[136893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:19 compute-0 python3.9[136895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324638.6106455-183-187100655429771/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ef3deb6a220b9ac95487eeab2c91b47cd4f38015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:19 compute-0 ceph-mon[74802]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:19 compute-0 sudo[136893]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:20 compute-0 sudo[137045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yolpibwrtqoleumdrvkddbcvbnyqiijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324639.8477578-183-181681424725932/AnsiballZ_stat.py'
Oct 01 13:17:20 compute-0 sudo[137045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:20 compute-0 python3.9[137047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:20 compute-0 sudo[137045]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:20 compute-0 sudo[137168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvqapzeaojikthsmetfejjykjgdrvnxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324639.8477578-183-181681424725932/AnsiballZ_copy.py'
Oct 01 13:17:20 compute-0 sudo[137168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:20 compute-0 python3.9[137170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324639.8477578-183-181681424725932/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=089a67abf9d2f0871caa69cb06eab193e51fbefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:20 compute-0 sudo[137168]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:21 compute-0 ceph-mon[74802]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:21 compute-0 sudo[137320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utnytpmplfdlapjrkzoumxohdljaqbbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324641.5680177-243-222148845448058/AnsiballZ_file.py'
Oct 01 13:17:21 compute-0 sudo[137320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:22 compute-0 python3.9[137322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:22 compute-0 sudo[137320]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:22 compute-0 sudo[137472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iofnruwmjgepevsulnwggnztneksmjtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324642.2245195-251-78918481527437/AnsiballZ_stat.py'
Oct 01 13:17:22 compute-0 sudo[137472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:22 compute-0 python3.9[137474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:22 compute-0 sudo[137472]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:22 compute-0 ceph-mon[74802]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:23 compute-0 sudo[137595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqiawizvfiaekktssnpgpbxcacpinhec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324642.2245195-251-78918481527437/AnsiballZ_copy.py'
Oct 01 13:17:23 compute-0 sudo[137595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:23 compute-0 python3.9[137597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324642.2245195-251-78918481527437/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:23 compute-0 sudo[137595]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:23 compute-0 sudo[137747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lshlqtzxoyconchrmauzbrethfaeyjto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324643.6620681-267-98117963703831/AnsiballZ_file.py'
Oct 01 13:17:23 compute-0 sudo[137747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:24 compute-0 python3.9[137749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:24 compute-0 sudo[137747]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:24 compute-0 sudo[137899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmhkqvoalyhdgaleinznxlvqpzybegkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324644.3884854-275-159752533263008/AnsiballZ_stat.py'
Oct 01 13:17:24 compute-0 sudo[137899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:24 compute-0 ceph-mon[74802]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:24 compute-0 python3.9[137901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:24 compute-0 sudo[137899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:25 compute-0 sudo[138022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkvrxteaazympgreilfhmxarnnusoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324644.3884854-275-159752533263008/AnsiballZ_copy.py'
Oct 01 13:17:25 compute-0 sudo[138022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:25 compute-0 python3.9[138024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324644.3884854-275-159752533263008/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:25 compute-0 sudo[138022]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:26 compute-0 sudo[138174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsaessthludevxgsfvbwessbnvfbbzsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324645.842003-291-212594421450870/AnsiballZ_file.py'
Oct 01 13:17:26 compute-0 sudo[138174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:26 compute-0 python3.9[138176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:26 compute-0 sudo[138174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:26 compute-0 sudo[138326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwrvbnszfiqxataoaoklwotgupdhmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324646.5371292-299-76069280003779/AnsiballZ_stat.py'
Oct 01 13:17:26 compute-0 sudo[138326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:26 compute-0 ceph-mon[74802]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:27 compute-0 python3.9[138328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:27 compute-0 sudo[138326]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:27 compute-0 sudo[138449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegrsecobgdenkrkpaytfmlkowzzqcok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324646.5371292-299-76069280003779/AnsiballZ_copy.py'
Oct 01 13:17:27 compute-0 sudo[138449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:27 compute-0 python3.9[138451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324646.5371292-299-76069280003779/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:27 compute-0 sudo[138449]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:28 compute-0 sudo[138601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hldoowsdeykrnoxsilxepiwdapcahabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324648.0321183-315-247599509389996/AnsiballZ_file.py'
Oct 01 13:17:28 compute-0 sudo[138601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:28 compute-0 python3.9[138603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:28 compute-0 sudo[138601]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:29 compute-0 sudo[138753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcscyznbnmhkbpffkvypkbqtpeikttx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324648.8199792-323-94025332381097/AnsiballZ_stat.py'
Oct 01 13:17:29 compute-0 sudo[138753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:29 compute-0 ceph-mon[74802]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:29 compute-0 python3.9[138755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:29 compute-0 sudo[138753]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:29 compute-0 sudo[138876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tasdrntyliuqyvctetpgutsoefqophxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324648.8199792-323-94025332381097/AnsiballZ_copy.py'
Oct 01 13:17:29 compute-0 sudo[138876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:30 compute-0 python3.9[138878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324648.8199792-323-94025332381097/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:30 compute-0 sudo[138876]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:30 compute-0 sudo[139028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtvldgjowqxkmktkkkumtuetrmpggrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324650.2395184-339-260379322609058/AnsiballZ_file.py'
Oct 01 13:17:30 compute-0 sudo[139028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:30 compute-0 python3.9[139030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:30 compute-0 sudo[139028]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:31 compute-0 ceph-mon[74802]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:31 compute-0 sudo[139180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoagapnwbqcykbptzrbgwjovbpheaqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324651.0733912-347-198621363234078/AnsiballZ_stat.py'
Oct 01 13:17:31 compute-0 sudo[139180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:31 compute-0 python3.9[139182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:31 compute-0 sudo[139180]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:32 compute-0 sudo[139303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkyahqxzjkvvbapfnyfqdmhoslxjnyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324651.0733912-347-198621363234078/AnsiballZ_copy.py'
Oct 01 13:17:32 compute-0 sudo[139303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:32 compute-0 python3.9[139305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324651.0733912-347-198621363234078/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:32 compute-0 sudo[139303]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:32 compute-0 sudo[139455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfupsapfmslhfksgezudaxzhtufcxahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324652.405205-363-169289361028715/AnsiballZ_file.py'
Oct 01 13:17:32 compute-0 sudo[139455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:32 compute-0 python3.9[139457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:32 compute-0 sudo[139455]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:33 compute-0 ceph-mon[74802]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:33 compute-0 sudo[139607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yauligztfpzdqltnuojvostgsmlwqbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324653.0915656-371-32166083566747/AnsiballZ_stat.py'
Oct 01 13:17:33 compute-0 sudo[139607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:33 compute-0 python3.9[139609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:33 compute-0 sudo[139607]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:34 compute-0 sudo[139730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgflmejjtcfylvelpkptrcnwoqgoucl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324653.0915656-371-32166083566747/AnsiballZ_copy.py'
Oct 01 13:17:34 compute-0 sudo[139730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:34 compute-0 python3.9[139732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324653.0915656-371-32166083566747/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9976f3964d5bacb9b657222aaa8308ffa5d61acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:34 compute-0 sudo[139730]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:34 compute-0 sshd-session[133624]: Connection closed by 192.168.122.30 port 60984
Oct 01 13:17:34 compute-0 sshd-session[133621]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:17:34 compute-0 systemd-logind[818]: Session 44 logged out. Waiting for processes to exit.
Oct 01 13:17:34 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Oct 01 13:17:34 compute-0 systemd[1]: session-44.scope: Consumed 24.436s CPU time.
Oct 01 13:17:34 compute-0 systemd-logind[818]: Removed session 44.
Oct 01 13:17:35 compute-0 ceph-mon[74802]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:37 compute-0 ceph-mon[74802]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:39 compute-0 ceph-mon[74802]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:40 compute-0 sshd-session[139759]: Accepted publickey for zuul from 192.168.122.30 port 51750 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:17:40 compute-0 systemd-logind[818]: New session 45 of user zuul.
Oct 01 13:17:40 compute-0 systemd[1]: Started Session 45 of User zuul.
Oct 01 13:17:40 compute-0 sshd-session[139759]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:17:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.647439) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660647573, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1617, "num_deletes": 252, "total_data_size": 2373344, "memory_usage": 2409592, "flush_reason": "Manual Compaction"}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660793907, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1379194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7443, "largest_seqno": 9059, "table_properties": {"data_size": 1373902, "index_size": 2368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15135, "raw_average_key_size": 20, "raw_value_size": 1361467, "raw_average_value_size": 1847, "num_data_blocks": 112, "num_entries": 737, "num_filter_entries": 737, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324502, "oldest_key_time": 1759324502, "file_creation_time": 1759324660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 146510 microseconds, and 8124 cpu microseconds.
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.793983) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1379194 bytes OK
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.794018) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.824961) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.824996) EVENT_LOG_v1 {"time_micros": 1759324660824984, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.825032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2366165, prev total WAL file size 2366165, number of live WAL files 2.
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.826419) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1346KB)], [20(7390KB)]
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660826516, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8947053, "oldest_snapshot_seqno": -1}
Oct 01 13:17:40 compute-0 sudo[139914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnkrtisigbyzjehgjvgkinmbwvpivtuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324660.3651702-22-30625982871785/AnsiballZ_file.py'
Oct 01 13:17:40 compute-0 sudo[139914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3405 keys, 7085856 bytes, temperature: kUnknown
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660952012, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7085856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7059435, "index_size": 16775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81512, "raw_average_key_size": 23, "raw_value_size": 6994311, "raw_average_value_size": 2054, "num_data_blocks": 741, "num_entries": 3405, "num_filter_entries": 3405, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.952396) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7085856 bytes
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.953804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.2 rd, 56.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 3848, records dropped: 443 output_compression: NoCompression
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.953823) EVENT_LOG_v1 {"time_micros": 1759324660953814, "job": 6, "event": "compaction_finished", "compaction_time_micros": 125718, "compaction_time_cpu_micros": 30546, "output_level": 6, "num_output_files": 1, "total_output_size": 7085856, "num_input_records": 3848, "num_output_records": 3405, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660954625, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324660956131, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.826272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:40 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:17:40.956380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:17:41 compute-0 python3.9[139916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:41 compute-0 sudo[139914]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:41 compute-0 ceph-mon[74802]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:41 compute-0 sshd-session[139917]: Invalid user seekcy from 80.253.31.232 port 52884
Oct 01 13:17:41 compute-0 sshd-session[139790]: Invalid user deploy from 27.254.137.144 port 36636
Oct 01 13:17:41 compute-0 sshd-session[139917]: Received disconnect from 80.253.31.232 port 52884:11: Bye Bye [preauth]
Oct 01 13:17:41 compute-0 sshd-session[139917]: Disconnected from invalid user seekcy 80.253.31.232 port 52884 [preauth]
Oct 01 13:17:41 compute-0 sudo[140068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojcojctsikobbucuulwmhgybtmermhnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324661.344504-34-28235488929367/AnsiballZ_stat.py'
Oct 01 13:17:41 compute-0 sudo[140068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:41 compute-0 python3.9[140070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:42 compute-0 sudo[140068]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:42 compute-0 sshd-session[139790]: Received disconnect from 27.254.137.144 port 36636:11: Bye Bye [preauth]
Oct 01 13:17:42 compute-0 sshd-session[139790]: Disconnected from invalid user deploy 27.254.137.144 port 36636 [preauth]
Oct 01 13:17:42 compute-0 sudo[140191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pazmgjosviddnwzouuqdcyhxtsahkbtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324661.344504-34-28235488929367/AnsiballZ_copy.py'
Oct 01 13:17:42 compute-0 sudo[140191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:42 compute-0 python3.9[140193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324661.344504-34-28235488929367/.source.conf _original_basename=ceph.conf follow=False checksum=86adabd2b76c58b2ebe51f5b2fa78db6f8424e89 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:42 compute-0 sudo[140191]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:43 compute-0 sudo[140343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qenymqagkifvxcladxwtznponutuobzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324662.908848-34-171146042036872/AnsiballZ_stat.py'
Oct 01 13:17:43 compute-0 sudo[140343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:43 compute-0 python3.9[140345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:17:43 compute-0 sudo[140343]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:43 compute-0 ceph-mon[74802]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:43 compute-0 sudo[140466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyarhblndkqkzmahyerbmqoiuectnmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324662.908848-34-171146042036872/AnsiballZ_copy.py'
Oct 01 13:17:43 compute-0 sudo[140466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:43 compute-0 python3.9[140468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324662.908848-34-171146042036872/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=cb7a726d0a2db4bead6fc30d6d9fab3edee0b4fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:17:43 compute-0 sudo[140466]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:44 compute-0 sshd-session[139762]: Connection closed by 192.168.122.30 port 51750
Oct 01 13:17:44 compute-0 sshd-session[139759]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:17:44 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Oct 01 13:17:44 compute-0 systemd[1]: session-45.scope: Consumed 2.897s CPU time.
Oct 01 13:17:44 compute-0 systemd-logind[818]: Session 45 logged out. Waiting for processes to exit.
Oct 01 13:17:44 compute-0 systemd-logind[818]: Removed session 45.
Oct 01 13:17:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:45 compute-0 ceph-mon[74802]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:46 compute-0 sshd-session[139758]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:17:46 compute-0 sshd-session[139758]: banner exchange: Connection from 202.103.55.158 port 46606: Connection timed out
Oct 01 13:17:47 compute-0 ceph-mon[74802]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:17:47
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:17:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:49 compute-0 sshd-session[140493]: Accepted publickey for zuul from 192.168.122.30 port 51752 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:17:49 compute-0 systemd-logind[818]: New session 46 of user zuul.
Oct 01 13:17:49 compute-0 systemd[1]: Started Session 46 of User zuul.
Oct 01 13:17:49 compute-0 sshd-session[140493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:17:49 compute-0 ceph-mon[74802]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:50 compute-0 python3.9[140646]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:17:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:51 compute-0 sudo[140800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmykgwzkxfpnziidjqovpkublwcosnxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324670.805767-34-163046225941594/AnsiballZ_file.py'
Oct 01 13:17:51 compute-0 sudo[140800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:51 compute-0 python3.9[140802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:51 compute-0 sudo[140800]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:51 compute-0 ceph-mon[74802]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:51 compute-0 sudo[140952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deepitlmviemmggiqptxrdybgdjoupjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324671.647894-34-138057750347580/AnsiballZ_file.py'
Oct 01 13:17:51 compute-0 sudo[140952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:52 compute-0 python3.9[140954]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:17:52 compute-0 sudo[140952]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:53 compute-0 python3.9[141104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:17:53 compute-0 sudo[141254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqfzldcswehmrhwmuvegmbodfeszpflj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324673.2244227-57-219036482252952/AnsiballZ_seboolean.py'
Oct 01 13:17:53 compute-0 sudo[141254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:53 compute-0 ceph-mon[74802]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:53 compute-0 python3.9[141256]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 01 13:17:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:54 compute-0 ceph-mon[74802]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:55 compute-0 sudo[141254]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:17:55 compute-0 sudo[141411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frcnkswqgmejinsqpmzcqaoizfhdjwcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324675.3816433-67-165088525970729/AnsiballZ_setup.py'
Oct 01 13:17:55 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 01 13:17:55 compute-0 sudo[141411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:56 compute-0 python3.9[141413]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:17:56 compute-0 sudo[141411]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:56 compute-0 sudo[141422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:56 compute-0 sudo[141422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:56 compute-0 sudo[141422]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:56 compute-0 sudo[141470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:17:56 compute-0 sudo[141470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:56 compute-0 sudo[141470]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:56 compute-0 sudo[141519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:56 compute-0 sudo[141519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:56 compute-0 sudo[141519]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:56 compute-0 sudo[141569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erpljulmyqfoqzntnuzwrugqorywfjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324675.3816433-67-165088525970729/AnsiballZ_dnf.py'
Oct 01 13:17:56 compute-0 sudo[141569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:56 compute-0 sudo[141571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:17:56 compute-0 sudo[141571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:56 compute-0 ceph-mon[74802]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:17:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:17:57 compute-0 python3.9[141576]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:17:57 compute-0 sudo[141571]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:17:57 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fce599b7-8cfd-4618-a8c6-1036b12a7292 does not exist
Oct 01 13:17:57 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5502e1d0-5473-4cba-855f-88c888b08091 does not exist
Oct 01 13:17:57 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev de12ed83-7c77-4ca2-a97f-fcb3c553a2e0 does not exist
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:17:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:17:57 compute-0 sudo[141630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:57 compute-0 sudo[141630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:57 compute-0 sudo[141630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:57 compute-0 sudo[141655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:17:57 compute-0 sudo[141655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:57 compute-0 sudo[141655]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:57 compute-0 sudo[141680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:57 compute-0 sudo[141680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:57 compute-0 sudo[141680]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:57 compute-0 sudo[141705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:17:57 compute-0 sudo[141705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:17:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.010285669 +0000 UTC m=+0.051936635 container create ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:17:58 compute-0 systemd[1]: Started libpod-conmon-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope.
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:57.995344951 +0000 UTC m=+0.036995917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.118567475 +0000 UTC m=+0.160218481 container init ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.12495998 +0000 UTC m=+0.166610976 container start ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:17:58 compute-0 naughty_diffie[141789]: 167 167
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.129316595 +0000 UTC m=+0.170967581 container attach ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:17:58 compute-0 systemd[1]: libpod-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope: Deactivated successfully.
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.129831981 +0000 UTC m=+0.171482967 container died ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2b6841a925c8eb6751ad4292e9408bc2873951fb7874aebbf1f69f4e916da8f-merged.mount: Deactivated successfully.
Oct 01 13:17:58 compute-0 podman[141771]: 2025-10-01 13:17:58.261562705 +0000 UTC m=+0.303213701 container remove ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:17:58 compute-0 systemd[1]: libpod-conmon-ba0dc824fdccc67befa79db5ccdd9b0bcd09ee87e268965c1307e2a0a194a1fd.scope: Deactivated successfully.
Oct 01 13:17:58 compute-0 sudo[141569]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:58 compute-0 podman[141837]: 2025-10-01 13:17:58.493493197 +0000 UTC m=+0.046332294 container create 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:17:58 compute-0 systemd[1]: Started libpod-conmon-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope.
Oct 01 13:17:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:17:58 compute-0 podman[141837]: 2025-10-01 13:17:58.474803242 +0000 UTC m=+0.027642359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:17:58 compute-0 podman[141837]: 2025-10-01 13:17:58.580186449 +0000 UTC m=+0.133025546 container init 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:17:58 compute-0 podman[141837]: 2025-10-01 13:17:58.587963317 +0000 UTC m=+0.140802414 container start 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:17:58 compute-0 podman[141837]: 2025-10-01 13:17:58.592102095 +0000 UTC m=+0.144941182 container attach 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:17:59 compute-0 sudo[141987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsqnaevbojqrsbppvvtgmpihfvhcjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324678.5351987-79-21894534977399/AnsiballZ_systemd.py'
Oct 01 13:17:59 compute-0 sudo[141987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:17:59 compute-0 ceph-mon[74802]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:59 compute-0 python3.9[141991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:17:59 compute-0 quizzical_bassi[141876]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:17:59 compute-0 quizzical_bassi[141876]: --> relative data size: 1.0
Oct 01 13:17:59 compute-0 quizzical_bassi[141876]: --> All data devices are unavailable
Oct 01 13:17:59 compute-0 sudo[141987]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:59 compute-0 systemd[1]: libpod-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Deactivated successfully.
Oct 01 13:17:59 compute-0 systemd[1]: libpod-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Consumed 1.028s CPU time.
Oct 01 13:17:59 compute-0 podman[141837]: 2025-10-01 13:17:59.675328896 +0000 UTC m=+1.228167993 container died 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-29333549eb120fda0a0417bc66698ecc1a126f2deda321a80af8af48a6078792-merged.mount: Deactivated successfully.
Oct 01 13:17:59 compute-0 podman[141837]: 2025-10-01 13:17:59.745660914 +0000 UTC m=+1.298500001 container remove 5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:17:59 compute-0 systemd[1]: libpod-conmon-5f1fab7d89d045feb689ccf7c3311ed14821957c2f804d0ee24a1db7fc3dd2c8.scope: Deactivated successfully.
Oct 01 13:17:59 compute-0 sudo[141705]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:59 compute-0 sudo[142048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:59 compute-0 sudo[142048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:59 compute-0 sudo[142048]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:17:59 compute-0 sudo[142073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:17:59 compute-0 sudo[142073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:59 compute-0 sudo[142073]: pam_unix(sudo:session): session closed for user root
Oct 01 13:17:59 compute-0 sudo[142118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:17:59 compute-0 sudo[142118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:17:59 compute-0 sudo[142118]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:00 compute-0 sudo[142161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:18:00 compute-0 sudo[142161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:18:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2035 writes, 9080 keys, 2035 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2035 writes, 9080 keys, 2035 commit groups, 1.0 writes per commit group, ingest: 11.41 MB, 0.02 MB/s
                                           Interval WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     40.2      0.21              0.03         3    0.071       0      0       0.0       0.0
                                             L6      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     83.8     74.1      0.19              0.05         2    0.094    7249    733       0.0       0.0
                                            Sum      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     39.3     56.1      0.40              0.08         5    0.080    7249    733       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     40.5     57.6      0.39              0.08         4    0.098    7249    733       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     83.8     74.1      0.19              0.05         2    0.094    7249    733       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     42.2      0.20              0.03         2    0.101       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 553.91 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(37,462.19 KB,0.146544%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.376494715 +0000 UTC m=+0.038562075 container create b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:18:00 compute-0 systemd[1]: Started libpod-conmon-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope.
Oct 01 13:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:18:00 compute-0 sudo[142331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syjwkvfzryxjtzuudfhlzeeezukfewxf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324679.948737-87-166874084413331/AnsiballZ_edpm_nftables_snippet.py'
Oct 01 13:18:00 compute-0 sudo[142331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.454010885 +0000 UTC m=+0.116078265 container init b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.360201115 +0000 UTC m=+0.022268505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.466021624 +0000 UTC m=+0.128089004 container start b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:18:00 compute-0 friendly_einstein[142329]: 167 167
Oct 01 13:18:00 compute-0 systemd[1]: libpod-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope: Deactivated successfully.
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.472279096 +0000 UTC m=+0.134346466 container attach b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.472539304 +0000 UTC m=+0.134606674 container died b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba8ee00d296a275af24909431f83e3be2061fbb28d069326a251761686e6938-merged.mount: Deactivated successfully.
Oct 01 13:18:00 compute-0 podman[142269]: 2025-10-01 13:18:00.519499816 +0000 UTC m=+0.181567186 container remove b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:18:00 compute-0 systemd[1]: libpod-conmon-b947d6cb0eee804bec39a2f10ee395cb4ab3adff46845dc03c32ad0d15edf6ce.scope: Deactivated successfully.
Oct 01 13:18:00 compute-0 python3[142334]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 01 13:18:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:00 compute-0 sudo[142331]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:00 compute-0 podman[142356]: 2025-10-01 13:18:00.70297383 +0000 UTC m=+0.041861317 container create c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:18:00 compute-0 systemd[1]: Started libpod-conmon-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope.
Oct 01 13:18:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:00 compute-0 podman[142356]: 2025-10-01 13:18:00.683562603 +0000 UTC m=+0.022450110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:00 compute-0 podman[142356]: 2025-10-01 13:18:00.807261602 +0000 UTC m=+0.146149099 container init c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:18:00 compute-0 podman[142356]: 2025-10-01 13:18:00.815568097 +0000 UTC m=+0.154455564 container start c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:18:00 compute-0 podman[142356]: 2025-10-01 13:18:00.820090996 +0000 UTC m=+0.158978483 container attach c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:18:01 compute-0 sudo[142527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyttamvdhmckmvayhiisxjkcoietxvrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324680.9341068-96-138700677499370/AnsiballZ_file.py'
Oct 01 13:18:01 compute-0 sudo[142527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:01 compute-0 ceph-mon[74802]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:01 compute-0 python3.9[142529]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:01 compute-0 sudo[142527]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:01 compute-0 boring_burnell[142397]: {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     "0": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "devices": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "/dev/loop3"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             ],
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_name": "ceph_lv0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_size": "21470642176",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "name": "ceph_lv0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "tags": {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_name": "ceph",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.crush_device_class": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.encrypted": "0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_id": "0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.vdo": "0"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             },
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "vg_name": "ceph_vg0"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         }
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     ],
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     "1": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "devices": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "/dev/loop4"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             ],
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_name": "ceph_lv1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_size": "21470642176",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "name": "ceph_lv1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "tags": {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_name": "ceph",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.crush_device_class": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.encrypted": "0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_id": "1",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.vdo": "0"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             },
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "vg_name": "ceph_vg1"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         }
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     ],
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     "2": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "devices": [
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "/dev/loop5"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             ],
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_name": "ceph_lv2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_size": "21470642176",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "name": "ceph_lv2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "tags": {
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.cluster_name": "ceph",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.crush_device_class": "",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.encrypted": "0",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osd_id": "2",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:                 "ceph.vdo": "0"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             },
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "type": "block",
Oct 01 13:18:01 compute-0 boring_burnell[142397]:             "vg_name": "ceph_vg2"
Oct 01 13:18:01 compute-0 boring_burnell[142397]:         }
Oct 01 13:18:01 compute-0 boring_burnell[142397]:     ]
Oct 01 13:18:01 compute-0 boring_burnell[142397]: }
Oct 01 13:18:01 compute-0 systemd[1]: libpod-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope: Deactivated successfully.
Oct 01 13:18:01 compute-0 podman[142356]: 2025-10-01 13:18:01.61486697 +0000 UTC m=+0.953754477 container died c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb338979ac36f6a3a66b3d41d7cdb5f0ae0087880bc399b511e8820b21f08b6-merged.mount: Deactivated successfully.
Oct 01 13:18:01 compute-0 podman[142356]: 2025-10-01 13:18:01.700514289 +0000 UTC m=+1.039401776 container remove c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:18:01 compute-0 systemd[1]: libpod-conmon-c2aff9b2afbdeb66bcb2db1064dbd6a7b3d7ff8b2aeaebea5e6d819106a8df61.scope: Deactivated successfully.
Oct 01 13:18:01 compute-0 sudo[142161]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:01 compute-0 sudo[142622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:18:01 compute-0 sudo[142622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:01 compute-0 sudo[142622]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:01 compute-0 sudo[142647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:18:01 compute-0 sudo[142647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:01 compute-0 sudo[142647]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:01 compute-0 sudo[142672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:18:01 compute-0 sudo[142672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:01 compute-0 sudo[142672]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:02 compute-0 sudo[142697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:18:02 compute-0 sudo[142697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:02 compute-0 sudo[142802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkfsvholhzlzdhbotzbilonybkdnezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324681.6874776-104-206493423147268/AnsiballZ_stat.py'
Oct 01 13:18:02 compute-0 sudo[142802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:02 compute-0 python3.9[142808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.440916454 +0000 UTC m=+0.050252694 container create 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:18:02 compute-0 sudo[142802]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:02 compute-0 systemd[1]: Started libpod-conmon-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope.
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.417518806 +0000 UTC m=+0.026855106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:18:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.530190485 +0000 UTC m=+0.139526745 container init 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.536629543 +0000 UTC m=+0.145965783 container start 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.541844833 +0000 UTC m=+0.151181113 container attach 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:18:02 compute-0 jovial_nightingale[142855]: 167 167
Oct 01 13:18:02 compute-0 systemd[1]: libpod-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope: Deactivated successfully.
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.545748473 +0000 UTC m=+0.155084713 container died 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-baadd02aa82d2c5e2f03e5f094fcea1f561743305bad6941fb6d7c089ba504e2-merged.mount: Deactivated successfully.
Oct 01 13:18:02 compute-0 podman[142837]: 2025-10-01 13:18:02.588231097 +0000 UTC m=+0.197567327 container remove 847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nightingale, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:18:02 compute-0 systemd[1]: libpod-conmon-847554598ff962c1ddeab02ee664d0184dca03594a2e2e7bfd4b15bc94dbff85.scope: Deactivated successfully.
Oct 01 13:18:02 compute-0 sudo[142952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktynsbypucrqbvdfrgcjzycfyfvisuaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324681.6874776-104-206493423147268/AnsiballZ_file.py'
Oct 01 13:18:02 compute-0 sudo[142952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:02 compute-0 podman[142939]: 2025-10-01 13:18:02.792409417 +0000 UTC m=+0.049928544 container create ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:18:02 compute-0 systemd[1]: Started libpod-conmon-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope.
Oct 01 13:18:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:02 compute-0 podman[142939]: 2025-10-01 13:18:02.774658672 +0000 UTC m=+0.032177829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:18:02 compute-0 podman[142939]: 2025-10-01 13:18:02.88010511 +0000 UTC m=+0.137624277 container init ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:18:02 compute-0 podman[142939]: 2025-10-01 13:18:02.886429914 +0000 UTC m=+0.143949051 container start ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:18:02 compute-0 podman[142939]: 2025-10-01 13:18:02.890095716 +0000 UTC m=+0.147614873 container attach ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:18:02 compute-0 python3.9[142959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:02 compute-0 sudo[142952]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:03 compute-0 ceph-mon[74802]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:03 compute-0 sudo[143129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qibdxnzcfngmjpxlqqrjfyndzhiorzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324683.2403724-116-45977403777093/AnsiballZ_stat.py'
Oct 01 13:18:03 compute-0 sudo[143129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:03 compute-0 python3.9[143133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:03 compute-0 sudo[143129]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]: {
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_id": 0,
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "type": "bluestore"
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     },
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_id": 2,
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "type": "bluestore"
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     },
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_id": 1,
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:         "type": "bluestore"
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]:     }
Oct 01 13:18:03 compute-0 pensive_elgamal[142970]: }
Oct 01 13:18:03 compute-0 systemd[1]: libpod-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Deactivated successfully.
Oct 01 13:18:03 compute-0 podman[142939]: 2025-10-01 13:18:03.908500067 +0000 UTC m=+1.166019204 container died ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:18:03 compute-0 systemd[1]: libpod-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Consumed 1.030s CPU time.
Oct 01 13:18:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a8a461b6538ec98ef715f940c62ce70858ff400a25696feb1e186ede535dd43-merged.mount: Deactivated successfully.
Oct 01 13:18:03 compute-0 podman[142939]: 2025-10-01 13:18:03.983650175 +0000 UTC m=+1.241169302 container remove ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elgamal, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:18:03 compute-0 systemd[1]: libpod-conmon-ed30406e0030e27129b5992061a12be09f095663b8a068d1f8dd8989032f3631.scope: Deactivated successfully.
Oct 01 13:18:04 compute-0 sudo[142697]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:18:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:18:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:18:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:18:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4fa6f51c-faf3-4bc5-ae67-9a0e3d0ab705 does not exist
Oct 01 13:18:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3bca51a4-ab30-4b88-9dc2-0cb238e8ae60 does not exist
Oct 01 13:18:04 compute-0 sudo[143244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkmbysfjrkamcrszhnxgarllwlgnmaxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324683.2403724-116-45977403777093/AnsiballZ_file.py'
Oct 01 13:18:04 compute-0 sudo[143244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:04 compute-0 sudo[143243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:18:04 compute-0 sudo[143243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:04 compute-0 sudo[143243]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:04 compute-0 sudo[143271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:18:04 compute-0 sudo[143271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:18:04 compute-0 sudo[143271]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:04 compute-0 python3.9[143266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xkgi35zp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:04 compute-0 sudo[143244]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:04 compute-0 sudo[143445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmobedugvylibfgszhjoexrbpewpzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324684.4619293-128-215108087354744/AnsiballZ_stat.py'
Oct 01 13:18:04 compute-0 sudo[143445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:04 compute-0 python3.9[143447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:05 compute-0 sudo[143445]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:05 compute-0 ceph-mon[74802]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:18:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:18:05 compute-0 sudo[143523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsteyizsonzhxwqqnfknxmuclojqmtrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324684.4619293-128-215108087354744/AnsiballZ_file.py'
Oct 01 13:18:05 compute-0 sudo[143523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:05 compute-0 python3.9[143525]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:05 compute-0 sudo[143523]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:06 compute-0 sudo[143675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbecbluovzfkscgvcatpqcgulzatsce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324685.7846591-141-276065198703810/AnsiballZ_command.py'
Oct 01 13:18:06 compute-0 sudo[143675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:06 compute-0 python3.9[143677]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:06 compute-0 sudo[143675]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:07 compute-0 ceph-mon[74802]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:07 compute-0 sudo[143828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmquaxsihflxjcfqjbqtxmtwjiaikjl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324686.7425191-149-179078443615279/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 13:18:07 compute-0 sudo[143828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:07 compute-0 python3[143830]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 13:18:07 compute-0 sudo[143828]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:08 compute-0 sudo[143980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txaoionqzerdgkcaqcpvboejexuwqkyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324687.704618-157-104862870495958/AnsiballZ_stat.py'
Oct 01 13:18:08 compute-0 sudo[143980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:08 compute-0 python3.9[143982]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:08 compute-0 sudo[143980]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:09 compute-0 ceph-mon[74802]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:09 compute-0 sudo[144105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlikojmikqxkbaraggdhmesuzjxolikd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324687.704618-157-104862870495958/AnsiballZ_copy.py'
Oct 01 13:18:09 compute-0 sudo[144105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:09 compute-0 python3.9[144107]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324687.704618-157-104862870495958/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:09 compute-0 sudo[144105]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:09 compute-0 sudo[144257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqrfmzkehfwvsfmpxthrktpbvpjtpkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324689.547799-172-172170631842449/AnsiballZ_stat.py'
Oct 01 13:18:09 compute-0 sudo[144257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:10 compute-0 python3.9[144259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:10 compute-0 sudo[144257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:10 compute-0 sudo[144382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xulytecmiqrpdikwyxizmzizkffsrnth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324689.547799-172-172170631842449/AnsiballZ_copy.py'
Oct 01 13:18:10 compute-0 sudo[144382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:10 compute-0 python3.9[144384]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324689.547799-172-172170631842449/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:10 compute-0 sudo[144382]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:11 compute-0 ceph-mon[74802]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:11 compute-0 sudo[144534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evbgfwcatwbhnmsepfpnluobikyjmpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324691.0129192-187-95134133507345/AnsiballZ_stat.py'
Oct 01 13:18:11 compute-0 sudo[144534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:11 compute-0 python3.9[144536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:11 compute-0 sudo[144534]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:12 compute-0 sudo[144659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osaopjzzywgaoikjdxlqaqwpmtwjfzeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324691.0129192-187-95134133507345/AnsiballZ_copy.py'
Oct 01 13:18:12 compute-0 sudo[144659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:12 compute-0 python3.9[144661]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324691.0129192-187-95134133507345/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:12 compute-0 sudo[144659]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:12 compute-0 sudo[144811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgojnnglxayfbyjmngzboejmuyqefnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324692.5405037-202-249147039233566/AnsiballZ_stat.py'
Oct 01 13:18:12 compute-0 sudo[144811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:13 compute-0 python3.9[144813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:13 compute-0 sudo[144811]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:13 compute-0 ceph-mon[74802]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:13 compute-0 sudo[144936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmxyllghmwpnwojrpumaarcxhdgjsfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324692.5405037-202-249147039233566/AnsiballZ_copy.py'
Oct 01 13:18:13 compute-0 sudo[144936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:13 compute-0 python3.9[144938]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324692.5405037-202-249147039233566/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:13 compute-0 sudo[144936]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:14 compute-0 sudo[145088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfufmhnamrarievhbxnzpxbufpyzzlew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324693.9516916-217-209569732680396/AnsiballZ_stat.py'
Oct 01 13:18:14 compute-0 sudo[145088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:14 compute-0 python3.9[145090]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:14 compute-0 sudo[145088]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:15 compute-0 sudo[145213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guppffswrfhjwlmvfaicjdxxglgglvze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324693.9516916-217-209569732680396/AnsiballZ_copy.py'
Oct 01 13:18:15 compute-0 sudo[145213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:15 compute-0 ceph-mon[74802]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:15 compute-0 python3.9[145215]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759324693.9516916-217-209569732680396/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:15 compute-0 sudo[145213]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:16 compute-0 sudo[145365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsuzkzcoumoedytaryecgejdxukkwgbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324695.807082-232-254851881281314/AnsiballZ_file.py'
Oct 01 13:18:16 compute-0 sudo[145365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:16 compute-0 python3.9[145367]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:16 compute-0 sudo[145365]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:17 compute-0 sudo[145517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhrzorxqttgonzvdqrsxthtjucahojy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324696.6527836-240-159858981438476/AnsiballZ_command.py'
Oct 01 13:18:17 compute-0 sudo[145517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:17 compute-0 python3.9[145519]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:17 compute-0 sudo[145517]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:17 compute-0 ceph-mon[74802]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:18 compute-0 sudo[145672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffkdkrmzouctvorzhocmprwdkvbcabg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324697.5488787-248-220533306575470/AnsiballZ_blockinfile.py'
Oct 01 13:18:18 compute-0 sudo[145672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:18 compute-0 python3.9[145674]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:18 compute-0 sudo[145672]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:18 compute-0 sudo[145824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfmffvhqldsrsgxeoymxeedeqvrwxxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324698.6037846-257-95130175287210/AnsiballZ_command.py'
Oct 01 13:18:18 compute-0 sudo[145824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:19 compute-0 python3.9[145826]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:19 compute-0 sudo[145824]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:19 compute-0 ceph-mon[74802]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:19 compute-0 sudo[145977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqtvhryuhfejdwnsitctzkpdhipufuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324699.4433455-265-235067222266836/AnsiballZ_stat.py'
Oct 01 13:18:19 compute-0 sudo[145977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:19 compute-0 python3.9[145979]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:18:19 compute-0 sudo[145977]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:20 compute-0 sudo[146131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fibbrgndwbrccnrmssbbkzretzurjoar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324700.1744487-273-151078684684149/AnsiballZ_command.py'
Oct 01 13:18:20 compute-0 sudo[146131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:20 compute-0 python3.9[146133]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:20 compute-0 sudo[146131]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:21 compute-0 sudo[146286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlntzvqooddcurjvqqhggpbvvbyfabfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324700.9449484-281-81874823866917/AnsiballZ_file.py'
Oct 01 13:18:21 compute-0 sudo[146286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:21 compute-0 python3.9[146288]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:21 compute-0 sudo[146286]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:21 compute-0 ceph-mon[74802]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:22 compute-0 python3.9[146438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:18:22 compute-0 ceph-mon[74802]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:23 compute-0 sshd-session[146439]: Invalid user seekcy from 156.236.31.46 port 44426
Oct 01 13:18:23 compute-0 sshd-session[146439]: Received disconnect from 156.236.31.46 port 44426:11: Bye Bye [preauth]
Oct 01 13:18:23 compute-0 sshd-session[146439]: Disconnected from invalid user seekcy 156.236.31.46 port 44426 [preauth]
Oct 01 13:18:23 compute-0 sudo[146591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iarjmyrvljwmyvhtwfhwigubiazfdvly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324703.4262614-321-58054060140705/AnsiballZ_command.py'
Oct 01 13:18:23 compute-0 sudo[146591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:23 compute-0 python3.9[146593]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:74:f6:ca:ec" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:23 compute-0 ovs-vsctl[146594]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:74:f6:ca:ec external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 01 13:18:23 compute-0 sudo[146591]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:24 compute-0 sudo[146744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhzljloigpdjfizjrndanmkqwogrikg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324704.249817-330-155718180906931/AnsiballZ_command.py'
Oct 01 13:18:24 compute-0 sudo[146744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:24 compute-0 python3.9[146746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:24 compute-0 sudo[146744]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:25 compute-0 ceph-mon[74802]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:25 compute-0 sudo[146899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqgqgcxoebijaxiogiiblcbljgspppx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324705.0332236-338-271744760209819/AnsiballZ_command.py'
Oct 01 13:18:25 compute-0 sudo[146899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:25 compute-0 python3.9[146901]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:18:25 compute-0 ovs-vsctl[146902]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 01 13:18:25 compute-0 sudo[146899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:26 compute-0 python3.9[147052]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:18:27 compute-0 sudo[147204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtooeqfdxuujzczgeovkkkjwjyktxcpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324706.6706197-355-68111148620657/AnsiballZ_file.py'
Oct 01 13:18:27 compute-0 sudo[147204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:27 compute-0 ceph-mon[74802]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:27 compute-0 python3.9[147206]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:27 compute-0 sudo[147204]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:27 compute-0 sudo[147357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymicdaahugpvlpgdkiwxenyurnnmgirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324707.4360378-363-8139408845680/AnsiballZ_stat.py'
Oct 01 13:18:27 compute-0 sudo[147357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:27 compute-0 python3.9[147360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:28 compute-0 sudo[147357]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:28 compute-0 sudo[147436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcmgldijnzuxyqcibeywjlllzvvegpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324707.4360378-363-8139408845680/AnsiballZ_file.py'
Oct 01 13:18:28 compute-0 sudo[147436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:28 compute-0 sshd-session[147356]: Invalid user jenkins from 200.7.101.139 port 55100
Oct 01 13:18:28 compute-0 sshd-session[147356]: Received disconnect from 200.7.101.139 port 55100:11: Bye Bye [preauth]
Oct 01 13:18:28 compute-0 sshd-session[147356]: Disconnected from invalid user jenkins 200.7.101.139 port 55100 [preauth]
Oct 01 13:18:28 compute-0 python3.9[147438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:28 compute-0 sudo[147436]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:28 compute-0 sudo[147588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nysyydafzgqfnqybmlnfixnmlhbcrljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324708.6033878-363-202431656335332/AnsiballZ_stat.py'
Oct 01 13:18:28 compute-0 sudo[147588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:29 compute-0 ceph-mon[74802]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:29 compute-0 python3.9[147590]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:29 compute-0 sudo[147588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:29 compute-0 sudo[147666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyuxwovrnotcssxcytrpfdeinlsnynm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324708.6033878-363-202431656335332/AnsiballZ_file.py'
Oct 01 13:18:29 compute-0 sudo[147666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:29 compute-0 python3.9[147668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:29 compute-0 sudo[147666]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:30 compute-0 sudo[147818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiotijhbjqgryyfiskmfnsljdhyizzjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324709.91724-386-81758141740242/AnsiballZ_file.py'
Oct 01 13:18:30 compute-0 sudo[147818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:30 compute-0 python3.9[147820]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:30 compute-0 sudo[147818]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:31 compute-0 ceph-mon[74802]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:31 compute-0 sudo[147971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgsanqmkkubskkpowbqlgqsdtlunzyjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324710.7424045-394-194898395487488/AnsiballZ_stat.py'
Oct 01 13:18:31 compute-0 sudo[147971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:31 compute-0 python3.9[147973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:31 compute-0 sudo[147971]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:31 compute-0 sudo[148050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydgtgabxntfyhyqvhlcyqhtyelhqlcdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324710.7424045-394-194898395487488/AnsiballZ_file.py'
Oct 01 13:18:31 compute-0 sudo[148050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:31 compute-0 python3.9[148052]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:31 compute-0 sudo[148050]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:32 compute-0 sudo[148202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cplknpupnwfelfkcyehbygqebfgyvuao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324712.013739-406-63102189202172/AnsiballZ_stat.py'
Oct 01 13:18:32 compute-0 sudo[148202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:32 compute-0 python3.9[148204]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:32 compute-0 sudo[148202]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:32 compute-0 sudo[148280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldtapgbwxuilmfuqgskkwjrrjhcwfuaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324712.013739-406-63102189202172/AnsiballZ_file.py'
Oct 01 13:18:32 compute-0 sudo[148280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:33 compute-0 python3.9[148282]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:33 compute-0 sudo[148280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:33 compute-0 ceph-mon[74802]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:33 compute-0 sudo[148432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsexntltwgnyhtdrwuafktbwhsafjlqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324713.2052162-418-25635418057246/AnsiballZ_systemd.py'
Oct 01 13:18:33 compute-0 sudo[148432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:33 compute-0 python3.9[148434]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:18:33 compute-0 systemd[1]: Reloading.
Oct 01 13:18:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:33 compute-0 systemd-sysv-generator[148466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:18:33 compute-0 systemd-rc-local-generator[148462]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:18:34 compute-0 sudo[148432]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:34 compute-0 sudo[148621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gycnzwbqztfajhyjgsfoibbuarskxloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324714.430557-426-64400656139811/AnsiballZ_stat.py'
Oct 01 13:18:34 compute-0 sudo[148621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:34 compute-0 python3.9[148623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:34 compute-0 sudo[148621]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:35 compute-0 ceph-mon[74802]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:35 compute-0 sudo[148699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadieobcnrwhevcqenwjkgeuiemdgbvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324714.430557-426-64400656139811/AnsiballZ_file.py'
Oct 01 13:18:35 compute-0 sudo[148699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:35 compute-0 python3.9[148701]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:35 compute-0 sudo[148699]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:35 compute-0 sudo[148851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwphlvdlqbbyzwvxdgogtuqwscoeakhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324715.6518147-438-52172559752045/AnsiballZ_stat.py'
Oct 01 13:18:35 compute-0 sudo[148851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:36 compute-0 python3.9[148853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:36 compute-0 sudo[148851]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:36 compute-0 sudo[148929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdlxhikbhumqnfuvrlzmpunqyoqbzxmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324715.6518147-438-52172559752045/AnsiballZ_file.py'
Oct 01 13:18:36 compute-0 sudo[148929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:36 compute-0 python3.9[148931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:36 compute-0 sudo[148929]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:37 compute-0 ceph-mon[74802]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:37 compute-0 sudo[149081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfdxuxqfhovzlojjbidkzovnsnfitsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324716.8857677-450-98939574142150/AnsiballZ_systemd.py'
Oct 01 13:18:37 compute-0 sudo[149081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:37 compute-0 python3.9[149083]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:18:37 compute-0 systemd[1]: Reloading.
Oct 01 13:18:37 compute-0 systemd-sysv-generator[149113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:18:37 compute-0 systemd-rc-local-generator[149109]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:18:37 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:18:37 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:18:37 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:18:37 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:18:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:37 compute-0 sudo[149081]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:38 compute-0 sudo[149275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bidfwyvevrumjtgjwlebrikzikmxgytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324718.209516-460-7120409009547/AnsiballZ_file.py'
Oct 01 13:18:38 compute-0 sudo[149275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:38 compute-0 python3.9[149277]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:38 compute-0 sudo[149275]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:39 compute-0 ceph-mon[74802]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:39 compute-0 sudo[149427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjbawkjnqgyixalmmilyiomgkoihhypy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324718.9331353-468-219059036755941/AnsiballZ_stat.py'
Oct 01 13:18:39 compute-0 sudo[149427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:39 compute-0 python3.9[149429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:39 compute-0 sudo[149427]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:39 compute-0 sudo[149550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzugizzgfjjfdthrdgipgqlfxozpdhlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324718.9331353-468-219059036755941/AnsiballZ_copy.py'
Oct 01 13:18:39 compute-0 sudo[149550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:40 compute-0 python3.9[149552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324718.9331353-468-219059036755941/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:40 compute-0 sudo[149550]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:40 compute-0 sudo[149702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdntcsbjikqwghvgtzrmccpbiyzbzgjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324720.483555-485-131887327611000/AnsiballZ_file.py'
Oct 01 13:18:40 compute-0 sudo[149702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:40 compute-0 python3.9[149704]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:18:40 compute-0 sudo[149702]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:41 compute-0 ceph-mon[74802]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:41 compute-0 sudo[149856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkwukjojccbozcmazpneplbucotmthti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324721.2204993-493-121768827405071/AnsiballZ_stat.py'
Oct 01 13:18:41 compute-0 sudo[149856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:41 compute-0 python3.9[149858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:18:41 compute-0 sudo[149856]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:42 compute-0 sudo[149979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaqgukcsdakqpsvhzvqyhjjyoobvksns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324721.2204993-493-121768827405071/AnsiballZ_copy.py'
Oct 01 13:18:42 compute-0 sudo[149979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:42 compute-0 sshd-session[149851]: Invalid user seekcy from 80.253.31.232 port 49924
Oct 01 13:18:42 compute-0 python3.9[149981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324721.2204993-493-121768827405071/.source.json _original_basename=.gxqdhbbp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:42 compute-0 sshd-session[149851]: Received disconnect from 80.253.31.232 port 49924:11: Bye Bye [preauth]
Oct 01 13:18:42 compute-0 sshd-session[149851]: Disconnected from invalid user seekcy 80.253.31.232 port 49924 [preauth]
Oct 01 13:18:42 compute-0 sudo[149979]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:42 compute-0 sudo[150131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyfrgixekxxvoapvqbkqhrwzaoftpvoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324722.4777036-508-136185377525575/AnsiballZ_file.py'
Oct 01 13:18:42 compute-0 sudo[150131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:42 compute-0 python3.9[150133]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:43 compute-0 sudo[150131]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:43 compute-0 ceph-mon[74802]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.117522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723117600, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 734, "num_deletes": 251, "total_data_size": 927993, "memory_usage": 941960, "flush_reason": "Manual Compaction"}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723126701, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 919656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9060, "largest_seqno": 9793, "table_properties": {"data_size": 915846, "index_size": 1590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8143, "raw_average_key_size": 18, "raw_value_size": 908285, "raw_average_value_size": 2068, "num_data_blocks": 74, "num_entries": 439, "num_filter_entries": 439, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324661, "oldest_key_time": 1759324661, "file_creation_time": 1759324723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9297 microseconds, and 5464 cpu microseconds.
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.126796) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 919656 bytes OK
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.126851) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128223) EVENT_LOG_v1 {"time_micros": 1759324723128216, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 924236, prev total WAL file size 924236, number of live WAL files 2.
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.129002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(898KB)], [23(6919KB)]
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723129046, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8005512, "oldest_snapshot_seqno": -1}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3330 keys, 6306410 bytes, temperature: kUnknown
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723177421, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6306410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6281720, "index_size": 15237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80730, "raw_average_key_size": 24, "raw_value_size": 6219110, "raw_average_value_size": 1867, "num_data_blocks": 663, "num_entries": 3330, "num_filter_entries": 3330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.177784) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6306410 bytes
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.179430) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 130.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.8 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(15.6) write-amplify(6.9) OK, records in: 3844, records dropped: 514 output_compression: NoCompression
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.179464) EVENT_LOG_v1 {"time_micros": 1759324723179448, "job": 8, "event": "compaction_finished", "compaction_time_micros": 48470, "compaction_time_cpu_micros": 28158, "output_level": 6, "num_output_files": 1, "total_output_size": 6306410, "num_input_records": 3844, "num_output_records": 3330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723179995, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324723183004, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.128901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:18:43.183181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:18:43 compute-0 sudo[150284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcfdofzvumndrhvtmzrzamsiawkjcxhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324723.2878761-516-57803744689821/AnsiballZ_stat.py'
Oct 01 13:18:43 compute-0 sudo[150284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:43 compute-0 sudo[150284]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:44 compute-0 sudo[150408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpnwlqlrpsujrnntabqspjeznbhvystc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324723.2878761-516-57803744689821/AnsiballZ_copy.py'
Oct 01 13:18:44 compute-0 sudo[150408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:44 compute-0 sudo[150408]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:45 compute-0 ceph-mon[74802]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:45 compute-0 sudo[150560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dknrptcoitztkocmwhmnyytotfzzesco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324724.8323495-533-241017629676396/AnsiballZ_container_config_data.py'
Oct 01 13:18:45 compute-0 sudo[150560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:45 compute-0 python3.9[150562]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 01 13:18:45 compute-0 sudo[150560]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:46 compute-0 sudo[150712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaclfrpvwuylahlrntqkhipiznsohvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324725.8635767-542-264807169556167/AnsiballZ_container_config_hash.py'
Oct 01 13:18:46 compute-0 sudo[150712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:46 compute-0 python3.9[150714]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:18:46 compute-0 sudo[150712]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:47 compute-0 sudo[150864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khajsysktozpnaioibtkzuxdnhdybdxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324726.880403-551-27682178547595/AnsiballZ_podman_container_info.py'
Oct 01 13:18:47 compute-0 sudo[150864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:47 compute-0 ceph-mon[74802]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:47 compute-0 python3.9[150866]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:18:47
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:18:47 compute-0 sudo[150864]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:18:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:48 compute-0 sudo[151042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzfvnvopvziffwkcaapfocmuybwtjjsu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324728.34951-564-41528976302317/AnsiballZ_edpm_container_manage.py'
Oct 01 13:18:48 compute-0 sudo[151042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:49 compute-0 python3[151044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:18:49 compute-0 ceph-mon[74802]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:51 compute-0 sshd-session[151072]: Invalid user brian from 27.254.137.144 port 60414
Oct 01 13:18:51 compute-0 sshd-session[151072]: Received disconnect from 27.254.137.144 port 60414:11: Bye Bye [preauth]
Oct 01 13:18:51 compute-0 sshd-session[151072]: Disconnected from invalid user brian 27.254.137.144 port 60414 [preauth]
Oct 01 13:18:51 compute-0 ceph-mon[74802]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:53 compute-0 ceph-mon[74802]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:54 compute-0 podman[151059]: 2025-10-01 13:18:54.932719318 +0000 UTC m=+5.679547091 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct 01 13:18:55 compute-0 podman[151182]: 2025-10-01 13:18:55.166351173 +0000 UTC m=+0.077020012 container create 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 13:18:55 compute-0 podman[151182]: 2025-10-01 13:18:55.128391874 +0000 UTC m=+0.039060764 image pull 7ffac6b06b247caf26cf673b775a5f070f2fa1a6008cf0b0964af7e905ba86a5 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct 01 13:18:55 compute-0 python3[151044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd
Oct 01 13:18:55 compute-0 sshd-session[150233]: ssh_dispatch_run_fatal: Connection from 202.103.55.158 port 60520: Connection timed out [preauth]
Oct 01 13:18:55 compute-0 sudo[151042]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:55 compute-0 ceph-mon[74802]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:18:55 compute-0 sudo[151370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svmafokvbudbukgkfvbenpftxusxwahs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324735.519035-572-214556811252230/AnsiballZ_stat.py'
Oct 01 13:18:55 compute-0 sudo[151370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:56 compute-0 python3.9[151372]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:18:56 compute-0 sudo[151370]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:56 compute-0 sudo[151524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptolzcfazidhnerfczlmqjcimkyedexv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324736.3907275-581-36349830093575/AnsiballZ_file.py'
Oct 01 13:18:56 compute-0 sudo[151524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:18:56 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:18:57 compute-0 python3.9[151526]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:57 compute-0 sudo[151524]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:57 compute-0 sudo[151600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbackfdbpekxtplywmgpkixhhidfgej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324736.3907275-581-36349830093575/AnsiballZ_stat.py'
Oct 01 13:18:57 compute-0 sudo[151600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:57 compute-0 python3.9[151602]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:18:57 compute-0 sudo[151600]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:57 compute-0 ceph-mon[74802]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:58 compute-0 sudo[151751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbyvgxuyphzsshdzkmfoccephuosiqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324737.5084484-581-130711057280542/AnsiballZ_copy.py'
Oct 01 13:18:58 compute-0 sudo[151751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:58 compute-0 python3.9[151753]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324737.5084484-581-130711057280542/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:18:58 compute-0 sudo[151751]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:58 compute-0 sudo[151827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewaayoktbzkijpxnecxsmenbwbmncyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324737.5084484-581-130711057280542/AnsiballZ_systemd.py'
Oct 01 13:18:58 compute-0 sudo[151827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:58 compute-0 python3.9[151829]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:18:58 compute-0 systemd[1]: Reloading.
Oct 01 13:18:58 compute-0 systemd-sysv-generator[151860]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:18:58 compute-0 systemd-rc-local-generator[151856]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:18:59 compute-0 sudo[151827]: pam_unix(sudo:session): session closed for user root
Oct 01 13:18:59 compute-0 sudo[151938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmimtxylmhlhhehbzyjipilxntxrmzsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324737.5084484-581-130711057280542/AnsiballZ_systemd.py'
Oct 01 13:18:59 compute-0 sudo[151938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:18:59 compute-0 ceph-mon[74802]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:59 compute-0 python3.9[151940]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:18:59 compute-0 systemd[1]: Reloading.
Oct 01 13:18:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:18:59 compute-0 systemd-sysv-generator[151971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:18:59 compute-0 systemd-rc-local-generator[151967]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:19:00 compute-0 systemd[1]: Starting ovn_controller container...
Oct 01 13:19:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/987770384d734843490fc415fb2ee473e75f002af4dc1b07e5543afd997383f6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad.
Oct 01 13:19:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:02 compute-0 podman[151980]: 2025-10-01 13:19:02.492292584 +0000 UTC m=+2.260379636 container init 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct 01 13:19:02 compute-0 ovn_controller[151996]: + sudo -E kolla_set_configs
Oct 01 13:19:02 compute-0 podman[151980]: 2025-10-01 13:19:02.529628252 +0000 UTC m=+2.297715214 container start 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:19:02 compute-0 ceph-mon[74802]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:02 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 01 13:19:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 01 13:19:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 01 13:19:02 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 01 13:19:02 compute-0 systemd[152019]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 01 13:19:02 compute-0 edpm-start-podman-container[151980]: ovn_controller
Oct 01 13:19:02 compute-0 systemd[152019]: Queued start job for default target Main User Target.
Oct 01 13:19:02 compute-0 systemd[152019]: Created slice User Application Slice.
Oct 01 13:19:02 compute-0 systemd[152019]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 01 13:19:02 compute-0 systemd[152019]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 13:19:02 compute-0 systemd[152019]: Reached target Paths.
Oct 01 13:19:02 compute-0 systemd[152019]: Reached target Timers.
Oct 01 13:19:02 compute-0 systemd[152019]: Starting D-Bus User Message Bus Socket...
Oct 01 13:19:02 compute-0 systemd[152019]: Starting Create User's Volatile Files and Directories...
Oct 01 13:19:02 compute-0 systemd[152019]: Finished Create User's Volatile Files and Directories.
Oct 01 13:19:02 compute-0 systemd[152019]: Listening on D-Bus User Message Bus Socket.
Oct 01 13:19:02 compute-0 systemd[152019]: Reached target Sockets.
Oct 01 13:19:02 compute-0 systemd[152019]: Reached target Basic System.
Oct 01 13:19:02 compute-0 systemd[152019]: Reached target Main User Target.
Oct 01 13:19:02 compute-0 systemd[152019]: Startup finished in 194ms.
Oct 01 13:19:02 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 01 13:19:02 compute-0 systemd[1]: Started Session c1 of User root.
Oct 01 13:19:02 compute-0 podman[152006]: 2025-10-01 13:19:02.881676245 +0000 UTC m=+0.334369050 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:19:02 compute-0 systemd[1]: 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad-6a7a58905eeb289d.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 13:19:02 compute-0 systemd[1]: 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad-6a7a58905eeb289d.service: Failed with result 'exit-code'.
Oct 01 13:19:02 compute-0 edpm-start-podman-container[151979]: Creating additional drop-in dependency for "ovn_controller" (583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad)
Oct 01 13:19:02 compute-0 systemd[1]: Reloading.
Oct 01 13:19:02 compute-0 ovn_controller[151996]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:19:02 compute-0 ovn_controller[151996]: INFO:__main__:Validating config file
Oct 01 13:19:02 compute-0 ovn_controller[151996]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:19:02 compute-0 ovn_controller[151996]: INFO:__main__:Writing out command to execute
Oct 01 13:19:02 compute-0 ovn_controller[151996]: ++ cat /run_command
Oct 01 13:19:02 compute-0 ovn_controller[151996]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 01 13:19:02 compute-0 ovn_controller[151996]: + ARGS=
Oct 01 13:19:02 compute-0 ovn_controller[151996]: + sudo kolla_copy_cacerts
Oct 01 13:19:03 compute-0 systemd-rc-local-generator[152087]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:19:03 compute-0 systemd-sysv-generator[152094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:19:03 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 01 13:19:03 compute-0 systemd[1]: Started ovn_controller container.
Oct 01 13:19:03 compute-0 systemd[1]: Started Session c2 of User root.
Oct 01 13:19:03 compute-0 sudo[151938]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:03 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: + [[ ! -n '' ]]
Oct 01 13:19:03 compute-0 ovn_controller[151996]: + . kolla_extend_start
Oct 01 13:19:03 compute-0 ovn_controller[151996]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 01 13:19:03 compute-0 ovn_controller[151996]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 01 13:19:03 compute-0 ovn_controller[151996]: + umask 0022
Oct 01 13:19:03 compute-0 ovn_controller[151996]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4524] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4531] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4542] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4547] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4550] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 01 13:19:03 compute-0 kernel: br-int: entered promiscuous mode
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00017|main|INFO|OVS feature set changed, force recompute.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.4832] manager: (ovn-35ad8f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 01 13:19:03 compute-0 systemd-udevd[152135]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:19:03 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 01 13:19:03 compute-0 systemd-udevd[152137]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.5096] device (genev_sys_6081): carrier: link connected
Oct 01 13:19:03 compute-0 NetworkManager[45411]: <info>  [1759324743.5099] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 13:19:03 compute-0 ovn_controller[151996]: 2025-10-01T13:19:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 01 13:19:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:03 compute-0 ceph-mon[74802]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:03 compute-0 sudo[152266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzwepoyslkbykeqxzbwhltsofwekeqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324743.6468008-609-147385533844626/AnsiballZ_command.py'
Oct 01 13:19:03 compute-0 sudo[152266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:04 compute-0 python3.9[152268]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:19:04 compute-0 ovs-vsctl[152269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 01 13:19:04 compute-0 sudo[152266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:04 compute-0 sudo[152270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:04 compute-0 sudo[152270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:04 compute-0 sudo[152270]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:04 compute-0 sudo[152319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:19:04 compute-0 sudo[152319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:04 compute-0 sudo[152319]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:04 compute-0 sudo[152344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:04 compute-0 sudo[152344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:04 compute-0 sudo[152344]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:04 compute-0 sudo[152396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:19:04 compute-0 sudo[152396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:04 compute-0 sudo[152533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klblxkueyftyspjoymvzfaddgmojiajl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324744.3975964-617-73871268204349/AnsiballZ_command.py'
Oct 01 13:19:04 compute-0 sudo[152533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:04 compute-0 ceph-mon[74802]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:04 compute-0 python3.9[152535]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:19:04 compute-0 ovs-vsctl[152542]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 01 13:19:05 compute-0 sudo[152533]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:05 compute-0 sudo[152396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:05 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e3abbc6c-6ef7-4a4f-bb38-92e73a6a4964 does not exist
Oct 01 13:19:05 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1e59db72-3cae-45dc-939c-48ce1f44f12d does not exist
Oct 01 13:19:05 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2e2a1ae8-85de-4eab-b1ce-96f87ecc05db does not exist
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:19:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:19:05 compute-0 sudo[152580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:05 compute-0 sudo[152580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:05 compute-0 sudo[152580]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:05 compute-0 sudo[152605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:19:05 compute-0 sudo[152605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:05 compute-0 sudo[152605]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:05 compute-0 sudo[152630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:05 compute-0 sudo[152630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:05 compute-0 sudo[152630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:05 compute-0 sudo[152661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:19:05 compute-0 sudo[152661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:05 compute-0 sudo[152845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpjwvpjgwddyexmvqczrqlxntdrqiodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324745.4513535-631-230919851053044/AnsiballZ_command.py'
Oct 01 13:19:05 compute-0 sudo[152845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:05 compute-0 podman[152847]: 2025-10-01 13:19:05.909951253 +0000 UTC m=+0.063285483 container create af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:19:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:19:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:19:05 compute-0 systemd[1]: Started libpod-conmon-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope.
Oct 01 13:19:05 compute-0 podman[152847]: 2025-10-01 13:19:05.881590715 +0000 UTC m=+0.034925025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:06 compute-0 podman[152847]: 2025-10-01 13:19:06.021486985 +0000 UTC m=+0.174821295 container init af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:19:06 compute-0 podman[152847]: 2025-10-01 13:19:06.033239653 +0000 UTC m=+0.186573903 container start af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:19:06 compute-0 podman[152847]: 2025-10-01 13:19:06.039606531 +0000 UTC m=+0.192940841 container attach af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:19:06 compute-0 wizardly_meninsky[152865]: 167 167
Oct 01 13:19:06 compute-0 systemd[1]: libpod-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope: Deactivated successfully.
Oct 01 13:19:06 compute-0 podman[152847]: 2025-10-01 13:19:06.044152064 +0000 UTC m=+0.197486324 container died af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:19:06 compute-0 python3.9[152849]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:19:06 compute-0 ovs-vsctl[152871]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 01 13:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-a57294c35eb46c72b698fe12b21fb4d1f884da6ef82bd9293ba1d1b9cf0d2636-merged.mount: Deactivated successfully.
Oct 01 13:19:06 compute-0 sudo[152845]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:06 compute-0 podman[152847]: 2025-10-01 13:19:06.108882411 +0000 UTC m=+0.262216641 container remove af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:19:06 compute-0 systemd[1]: libpod-conmon-af0f4b193f2b88c325b31ffcceec798ba036fe24ef6976e4b881de0a9fa0ab7f.scope: Deactivated successfully.
Oct 01 13:19:06 compute-0 podman[152914]: 2025-10-01 13:19:06.299794859 +0000 UTC m=+0.041855642 container create e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:19:06 compute-0 systemd[1]: Started libpod-conmon-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope.
Oct 01 13:19:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:06 compute-0 podman[152914]: 2025-10-01 13:19:06.280835965 +0000 UTC m=+0.022896748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:06 compute-0 podman[152914]: 2025-10-01 13:19:06.395518655 +0000 UTC m=+0.137579428 container init e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:19:06 compute-0 podman[152914]: 2025-10-01 13:19:06.411211487 +0000 UTC m=+0.153272260 container start e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:19:06 compute-0 podman[152914]: 2025-10-01 13:19:06.415277684 +0000 UTC m=+0.157338447 container attach e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:19:06 compute-0 sshd-session[140496]: Connection closed by 192.168.122.30 port 51752
Oct 01 13:19:06 compute-0 sshd-session[140493]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:19:06 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Oct 01 13:19:06 compute-0 systemd[1]: session-46.scope: Consumed 1min 2.620s CPU time.
Oct 01 13:19:06 compute-0 systemd-logind[818]: Session 46 logged out. Waiting for processes to exit.
Oct 01 13:19:06 compute-0 systemd-logind[818]: Removed session 46.
Oct 01 13:19:07 compute-0 ceph-mon[74802]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:07 compute-0 sharp_haslett[152930]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:19:07 compute-0 sharp_haslett[152930]: --> relative data size: 1.0
Oct 01 13:19:07 compute-0 sharp_haslett[152930]: --> All data devices are unavailable
Oct 01 13:19:07 compute-0 systemd[1]: libpod-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Deactivated successfully.
Oct 01 13:19:07 compute-0 systemd[1]: libpod-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Consumed 1.099s CPU time.
Oct 01 13:19:07 compute-0 podman[152959]: 2025-10-01 13:19:07.600321239 +0000 UTC m=+0.032539430 container died e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-637626288da7f111e82e5e3074c542f174cdb5c7aee2eeecbd19912c4b202953-merged.mount: Deactivated successfully.
Oct 01 13:19:07 compute-0 podman[152959]: 2025-10-01 13:19:07.660635317 +0000 UTC m=+0.092853498 container remove e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:19:07 compute-0 systemd[1]: libpod-conmon-e0340a64f71998f776b2c5eed94ec193e70765e74f606c4431f04f9e0a28476d.scope: Deactivated successfully.
Oct 01 13:19:07 compute-0 sudo[152661]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:07 compute-0 sudo[152974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:07 compute-0 sudo[152974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:07 compute-0 sudo[152974]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:07 compute-0 sudo[152999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:19:07 compute-0 sudo[152999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:07 compute-0 sudo[152999]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:07 compute-0 sudo[153024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:07 compute-0 sudo[153024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:07 compute-0 sudo[153024]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:08 compute-0 sudo[153049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:19:08 compute-0 sudo[153049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.454717471 +0000 UTC m=+0.046748156 container create 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:19:08 compute-0 systemd[1]: Started libpod-conmon-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope.
Oct 01 13:19:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.434361453 +0000 UTC m=+0.026392178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.529933906 +0000 UTC m=+0.121964591 container init 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.537896025 +0000 UTC m=+0.129926740 container start 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.541971003 +0000 UTC m=+0.134001708 container attach 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:19:08 compute-0 wizardly_swirles[153132]: 167 167
Oct 01 13:19:08 compute-0 systemd[1]: libpod-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope: Deactivated successfully.
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.543925324 +0000 UTC m=+0.135956029 container died 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f75ca87cf9737f1571aef45ca8eeb4043542f4a63a78b58a3c691e2f2e6b2c9c-merged.mount: Deactivated successfully.
Oct 01 13:19:08 compute-0 podman[153116]: 2025-10-01 13:19:08.591642258 +0000 UTC m=+0.183672993 container remove 006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:19:08 compute-0 systemd[1]: libpod-conmon-006a9c6bb43e5840155b6118c32b0a16a01fba3ddcef1a71cb5bf919e6ae3e6b.scope: Deactivated successfully.
Oct 01 13:19:08 compute-0 podman[153156]: 2025-10-01 13:19:08.790112242 +0000 UTC m=+0.051337569 container create 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:19:08 compute-0 systemd[1]: Started libpod-conmon-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope.
Oct 01 13:19:08 compute-0 podman[153156]: 2025-10-01 13:19:08.760389442 +0000 UTC m=+0.021614849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:08 compute-0 podman[153156]: 2025-10-01 13:19:08.882285458 +0000 UTC m=+0.143510875 container init 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:19:08 compute-0 podman[153156]: 2025-10-01 13:19:08.894677686 +0000 UTC m=+0.155903043 container start 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:19:08 compute-0 podman[153156]: 2025-10-01 13:19:08.899081024 +0000 UTC m=+0.160306441 container attach 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:19:09 compute-0 ceph-mon[74802]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]: {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     "0": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "devices": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "/dev/loop3"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             ],
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_name": "ceph_lv0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_size": "21470642176",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "name": "ceph_lv0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "tags": {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_name": "ceph",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.crush_device_class": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.encrypted": "0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_id": "0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.vdo": "0"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             },
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "vg_name": "ceph_vg0"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         }
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     ],
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     "1": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "devices": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "/dev/loop4"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             ],
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_name": "ceph_lv1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_size": "21470642176",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "name": "ceph_lv1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "tags": {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_name": "ceph",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.crush_device_class": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.encrypted": "0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_id": "1",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.vdo": "0"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             },
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "vg_name": "ceph_vg1"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         }
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     ],
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     "2": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "devices": [
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "/dev/loop5"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             ],
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_name": "ceph_lv2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_size": "21470642176",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "name": "ceph_lv2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "tags": {
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.cluster_name": "ceph",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.crush_device_class": "",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.encrypted": "0",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osd_id": "2",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:                 "ceph.vdo": "0"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             },
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "type": "block",
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:             "vg_name": "ceph_vg2"
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:         }
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]:     ]
Oct 01 13:19:09 compute-0 inspiring_ramanujan[153173]: }
Oct 01 13:19:09 compute-0 systemd[1]: libpod-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope: Deactivated successfully.
Oct 01 13:19:09 compute-0 podman[153156]: 2025-10-01 13:19:09.654355942 +0000 UTC m=+0.915581299 container died 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c90ecb47e861a9a01b745fdbd1f57fc543977794728813b05afacac02f526718-merged.mount: Deactivated successfully.
Oct 01 13:19:09 compute-0 podman[153156]: 2025-10-01 13:19:09.727149641 +0000 UTC m=+0.988374968 container remove 6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:19:09 compute-0 systemd[1]: libpod-conmon-6afcac6c04e56529ecf4f325121d818b27dfb8b334612cdc591d69c96c710e17.scope: Deactivated successfully.
Oct 01 13:19:09 compute-0 sudo[153049]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:09 compute-0 sudo[153196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:09 compute-0 sudo[153196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:09 compute-0 sudo[153196]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:09 compute-0 sudo[153221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:19:09 compute-0 sudo[153221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:09 compute-0 sudo[153221]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:09 compute-0 sudo[153246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:09 compute-0 sudo[153246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:09 compute-0 sudo[153246]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:10 compute-0 sudo[153271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:19:10 compute-0 sudo[153271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.306352666 +0000 UTC m=+0.036495944 container create 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:19:10 compute-0 systemd[1]: Started libpod-conmon-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope.
Oct 01 13:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.370355051 +0000 UTC m=+0.100498339 container init 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.381568162 +0000 UTC m=+0.111711440 container start 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.384993498 +0000 UTC m=+0.115136786 container attach 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:19:10 compute-0 intelligent_stonebraker[153353]: 167 167
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.290276333 +0000 UTC m=+0.020419621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:10 compute-0 systemd[1]: libpod-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope: Deactivated successfully.
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.387562749 +0000 UTC m=+0.117706057 container died 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69f6547bb4e9fe1169109da329509b67591caf573217383f2ab1061c8bb5eeb-merged.mount: Deactivated successfully.
Oct 01 13:19:10 compute-0 podman[153336]: 2025-10-01 13:19:10.426597271 +0000 UTC m=+0.156740559 container remove 456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:19:10 compute-0 systemd[1]: libpod-conmon-456cf46014d6ef5bee5e09c2b42691874cc2e59e3f4565b01408e5b3e09987d5.scope: Deactivated successfully.
Oct 01 13:19:10 compute-0 podman[153378]: 2025-10-01 13:19:10.594856629 +0000 UTC m=+0.040927022 container create b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:19:10 compute-0 systemd[1]: Started libpod-conmon-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope.
Oct 01 13:19:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:19:10 compute-0 podman[153378]: 2025-10-01 13:19:10.574788921 +0000 UTC m=+0.020859294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:19:10 compute-0 podman[153378]: 2025-10-01 13:19:10.681867284 +0000 UTC m=+0.127937717 container init b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:19:10 compute-0 podman[153378]: 2025-10-01 13:19:10.69930492 +0000 UTC m=+0.145375263 container start b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:19:10 compute-0 podman[153378]: 2025-10-01 13:19:10.702755198 +0000 UTC m=+0.148825561 container attach b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:19:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:11 compute-0 ceph-mon[74802]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:11 compute-0 zealous_jang[153395]: {
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_id": 0,
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "type": "bluestore"
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     },
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_id": 2,
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "type": "bluestore"
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     },
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_id": 1,
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:19:11 compute-0 zealous_jang[153395]:         "type": "bluestore"
Oct 01 13:19:11 compute-0 zealous_jang[153395]:     }
Oct 01 13:19:11 compute-0 zealous_jang[153395]: }
Oct 01 13:19:11 compute-0 systemd[1]: libpod-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Deactivated successfully.
Oct 01 13:19:11 compute-0 systemd[1]: libpod-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Consumed 1.081s CPU time.
Oct 01 13:19:11 compute-0 podman[153378]: 2025-10-01 13:19:11.774200226 +0000 UTC m=+1.220270649 container died b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8ebe456a8af087a8672adbcb9ae5ace290aff1150d181b89cb20386e237493e-merged.mount: Deactivated successfully.
Oct 01 13:19:11 compute-0 podman[153378]: 2025-10-01 13:19:11.834342569 +0000 UTC m=+1.280412912 container remove b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:19:11 compute-0 systemd[1]: libpod-conmon-b75c56a255b9e1dec561bc7a0496339fcf197884c0949ec780e0457eb124bce0.scope: Deactivated successfully.
Oct 01 13:19:11 compute-0 sudo[153271]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:19:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:19:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b2e848ff-da64-45d2-96fe-b6d5c03e9fea does not exist
Oct 01 13:19:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 16fbf88a-310f-4c1f-9d06-9e2e5cbb8293 does not exist
Oct 01 13:19:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:11 compute-0 sudo[153442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:19:11 compute-0 sudo[153442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:12 compute-0 sudo[153442]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:12 compute-0 sudo[153467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:19:12 compute-0 sudo[153467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:19:12 compute-0 sudo[153467]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:12 compute-0 sshd-session[153492]: Accepted publickey for zuul from 192.168.122.30 port 39068 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:19:12 compute-0 systemd-logind[818]: New session 48 of user zuul.
Oct 01 13:19:12 compute-0 systemd[1]: Started Session 48 of User zuul.
Oct 01 13:19:12 compute-0 sshd-session[153492]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:19:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:19:12 compute-0 ceph-mon[74802]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:13 compute-0 python3.9[153645]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:19:13 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 01 13:19:13 compute-0 systemd[152019]: Activating special unit Exit the Session...
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped target Main User Target.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped target Basic System.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped target Paths.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped target Sockets.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped target Timers.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 13:19:13 compute-0 systemd[152019]: Closed D-Bus User Message Bus Socket.
Oct 01 13:19:13 compute-0 systemd[152019]: Stopped Create User's Volatile Files and Directories.
Oct 01 13:19:13 compute-0 systemd[152019]: Removed slice User Application Slice.
Oct 01 13:19:13 compute-0 systemd[152019]: Reached target Shutdown.
Oct 01 13:19:13 compute-0 systemd[152019]: Finished Exit the Session.
Oct 01 13:19:13 compute-0 systemd[152019]: Reached target Exit the Session.
Oct 01 13:19:13 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 01 13:19:13 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 01 13:19:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 01 13:19:13 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 01 13:19:13 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 01 13:19:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 01 13:19:13 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 01 13:19:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:14 compute-0 sudo[153801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xntccstroytgbkrripooahpogaryasmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324753.775246-34-113082510393967/AnsiballZ_file.py'
Oct 01 13:19:14 compute-0 sudo[153801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:14 compute-0 python3.9[153803]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:14 compute-0 sudo[153801]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:14 compute-0 sudo[153953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmownttynbkifekokmwtlabrbybyaufc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324754.5439186-34-169665853843648/AnsiballZ_file.py'
Oct 01 13:19:14 compute-0 sudo[153953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:15 compute-0 ceph-mon[74802]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:15 compute-0 python3.9[153955]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:15 compute-0 sudo[153953]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:15 compute-0 sudo[154105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhrhpayuauynrhezvlhpjywzkhsnlbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324755.2851179-34-86011725060323/AnsiballZ_file.py'
Oct 01 13:19:15 compute-0 sudo[154105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:15 compute-0 python3.9[154107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:15 compute-0 sudo[154105]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:16 compute-0 sudo[154257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxxtdihemmgluyzgrqzzjneihcoyhog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324756.0882776-34-73783209977171/AnsiballZ_file.py'
Oct 01 13:19:16 compute-0 sudo[154257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:16 compute-0 python3.9[154259]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:16 compute-0 sudo[154257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:17 compute-0 ceph-mon[74802]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:17 compute-0 sudo[154409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piirwxojszuatgcghttgjyxuahxtofew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324756.7854922-34-193213453957430/AnsiballZ_file.py'
Oct 01 13:19:17 compute-0 sudo[154409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:17 compute-0 python3.9[154411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:17 compute-0 sudo[154409]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:18 compute-0 python3.9[154561]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:19:18 compute-0 sudo[154711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyjduguyvrlqrgdenfgzkoanggbybpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324758.432095-78-24118686797594/AnsiballZ_seboolean.py'
Oct 01 13:19:18 compute-0 sudo[154711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:19 compute-0 ceph-mon[74802]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:19 compute-0 python3.9[154713]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 01 13:19:19 compute-0 sudo[154711]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:20 compute-0 python3.9[154863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:21 compute-0 ceph-mon[74802]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:21 compute-0 python3.9[154984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324759.9217083-86-212900004093909/.source follow=False _original_basename=haproxy.j2 checksum=3032b37a17ecbb7a27e901a243b96261ef70a559 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:22 compute-0 python3.9[155135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:22 compute-0 python3.9[155256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324761.5796463-101-31830838153653/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:23 compute-0 ceph-mon[74802]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:23 compute-0 sudo[155406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjhulzxbcoebwraeuqopsqjntcczbnyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324762.9581432-118-83013487269007/AnsiballZ_setup.py'
Oct 01 13:19:23 compute-0 sudo[155406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:23 compute-0 python3.9[155408]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:19:23 compute-0 sudo[155406]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:24 compute-0 sudo[155490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzndnwplqedesbkdexsegrjbhzokfepq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324762.9581432-118-83013487269007/AnsiballZ_dnf.py'
Oct 01 13:19:24 compute-0 sudo[155490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:24 compute-0 python3.9[155492]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:19:25 compute-0 ceph-mon[74802]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:25 compute-0 sudo[155490]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:26 compute-0 sudo[155643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxrgiwtvrepybkogsvpxxniotyumdwfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324765.85409-130-207331533665001/AnsiballZ_systemd.py'
Oct 01 13:19:26 compute-0 sudo[155643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:26 compute-0 python3.9[155645]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:19:26 compute-0 sudo[155643]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:27 compute-0 ceph-mon[74802]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:27 compute-0 python3.9[155798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:28 compute-0 python3.9[155919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324767.0593774-138-92313042679548/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:28 compute-0 python3.9[156069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:29 compute-0 ceph-mon[74802]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:29 compute-0 python3.9[156192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324768.370188-138-268189513629070/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:29 compute-0 sshd-session[156140]: Invalid user admin from 80.94.95.25 port 50315
Oct 01 13:19:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:30 compute-0 sshd-session[156140]: Received disconnect from 80.94.95.25 port 50315:11: Bye [preauth]
Oct 01 13:19:30 compute-0 sshd-session[156140]: Disconnected from invalid user admin 80.94.95.25 port 50315 [preauth]
Oct 01 13:19:30 compute-0 python3.9[156342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:31 compute-0 ceph-mon[74802]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:31 compute-0 python3.9[156463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324770.268938-182-106148326016263/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:32 compute-0 python3.9[156613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:32 compute-0 python3.9[156734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324771.6564763-182-139132528206125/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:33 compute-0 ovn_controller[151996]: 2025-10-01T13:19:33Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Oct 01 13:19:33 compute-0 ovn_controller[151996]: 2025-10-01T13:19:33Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct 01 13:19:33 compute-0 podman[156858]: 2025-10-01 13:19:33.245859375 +0000 UTC m=+0.136842225 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 01 13:19:33 compute-0 ceph-mon[74802]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:33 compute-0 python3.9[156900]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:19:33 compute-0 sudo[157063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmuuxkeoliahysaonomajleetcqsyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324773.552618-220-220409640753306/AnsiballZ_file.py'
Oct 01 13:19:33 compute-0 sudo[157063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:34 compute-0 python3.9[157065]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:34 compute-0 sudo[157063]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:34 compute-0 sudo[157215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvkihyudgmsaphtmacenityeptzvgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324774.1921039-228-96126570323097/AnsiballZ_stat.py'
Oct 01 13:19:34 compute-0 sudo[157215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:34 compute-0 python3.9[157217]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:34 compute-0 sudo[157215]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:35 compute-0 sudo[157295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiliealhyqfcngbizdbnxlegcejuzxua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324774.1921039-228-96126570323097/AnsiballZ_file.py'
Oct 01 13:19:35 compute-0 sudo[157295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:35 compute-0 python3.9[157297]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:35 compute-0 sudo[157295]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:35 compute-0 ceph-mon[74802]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:35 compute-0 sshd-session[157266]: Invalid user av from 156.236.31.46 port 44514
Oct 01 13:19:35 compute-0 sshd-session[157266]: Received disconnect from 156.236.31.46 port 44514:11: Bye Bye [preauth]
Oct 01 13:19:35 compute-0 sshd-session[157266]: Disconnected from invalid user av 156.236.31.46 port 44514 [preauth]
Oct 01 13:19:35 compute-0 sudo[157447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybkajflbqhpoomrbgtvzchvwwuuyaek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324775.4409246-228-196713697849792/AnsiballZ_stat.py'
Oct 01 13:19:35 compute-0 sudo[157447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:35 compute-0 python3.9[157449]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:35 compute-0 sudo[157447]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:36 compute-0 sudo[157525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pncvxqmxskgtknrjrokbggolpcjtwgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324775.4409246-228-196713697849792/AnsiballZ_file.py'
Oct 01 13:19:36 compute-0 sudo[157525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:36 compute-0 python3.9[157527]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:36 compute-0 sudo[157525]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:36 compute-0 sudo[157677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yprsqqcejjjjgrjzwbidwhoyfgqhtmzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324776.6159754-251-272041928034338/AnsiballZ_file.py'
Oct 01 13:19:36 compute-0 sudo[157677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:37 compute-0 python3.9[157679]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:37 compute-0 sudo[157677]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:37 compute-0 ceph-mon[74802]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:37 compute-0 sudo[157829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuabvulewuoyoqbjpoisycgckgbopbhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324777.3373241-259-98763834597757/AnsiballZ_stat.py'
Oct 01 13:19:37 compute-0 sudo[157829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:37 compute-0 python3.9[157831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:37 compute-0 sudo[157829]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:38 compute-0 sudo[157907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajxebtbzfgtxluhblgknpgnugwwchjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324777.3373241-259-98763834597757/AnsiballZ_file.py'
Oct 01 13:19:38 compute-0 sudo[157907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:38 compute-0 python3.9[157909]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:38 compute-0 sudo[157907]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:38 compute-0 sudo[158059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmyatiecccqiqcsweprrahztzngxcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324778.6481361-271-80772230010332/AnsiballZ_stat.py'
Oct 01 13:19:38 compute-0 sudo[158059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:39 compute-0 python3.9[158061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:39 compute-0 sudo[158059]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:39 compute-0 ceph-mon[74802]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:39 compute-0 sudo[158137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvrhgxnrwsfzldcyanifgvwclsibxwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324778.6481361-271-80772230010332/AnsiballZ_file.py'
Oct 01 13:19:39 compute-0 sudo[158137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:39 compute-0 python3.9[158139]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:39 compute-0 sudo[158137]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:40 compute-0 sudo[158289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euiofualdvmovdjxfmybcvwkedocnwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324779.7951872-283-63585877185432/AnsiballZ_systemd.py'
Oct 01 13:19:40 compute-0 sudo[158289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:40 compute-0 python3.9[158291]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:19:40 compute-0 systemd[1]: Reloading.
Oct 01 13:19:40 compute-0 systemd-sysv-generator[158324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:19:40 compute-0 systemd-rc-local-generator[158321]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:19:40 compute-0 sudo[158289]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:41 compute-0 sshd-session[158329]: Invalid user mtvps1 from 80.253.31.232 port 43272
Oct 01 13:19:41 compute-0 sudo[158482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzadrqbstvagypxyqpkvxgvfxuxwlvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324780.822762-291-62523540446072/AnsiballZ_stat.py'
Oct 01 13:19:41 compute-0 sudo[158482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:41 compute-0 sshd-session[158329]: Received disconnect from 80.253.31.232 port 43272:11: Bye Bye [preauth]
Oct 01 13:19:41 compute-0 sshd-session[158329]: Disconnected from invalid user mtvps1 80.253.31.232 port 43272 [preauth]
Oct 01 13:19:41 compute-0 python3.9[158484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:41 compute-0 sudo[158482]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:41 compute-0 ceph-mon[74802]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:41 compute-0 sudo[158560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfwlibuipuztmaaehkqvmsvslnjzofel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324780.822762-291-62523540446072/AnsiballZ_file.py'
Oct 01 13:19:41 compute-0 sudo[158560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:41 compute-0 python3.9[158562]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:41 compute-0 sudo[158560]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:42 compute-0 sudo[158712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxyuowobujtsvaofgxfosuptagtmwxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324782.0931795-303-7022189173180/AnsiballZ_stat.py'
Oct 01 13:19:42 compute-0 sudo[158712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:42 compute-0 python3.9[158714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:42 compute-0 sudo[158712]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:42 compute-0 sudo[158792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasfzdifzplpcdrhrkbrzwnofrhnlhvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324782.0931795-303-7022189173180/AnsiballZ_file.py'
Oct 01 13:19:42 compute-0 sudo[158792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:43 compute-0 python3.9[158794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:43 compute-0 sudo[158792]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:43 compute-0 ceph-mon[74802]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:43 compute-0 sshd-session[158775]: Invalid user seekcy from 200.7.101.139 port 33834
Oct 01 13:19:43 compute-0 sshd-session[158775]: Received disconnect from 200.7.101.139 port 33834:11: Bye Bye [preauth]
Oct 01 13:19:43 compute-0 sshd-session[158775]: Disconnected from invalid user seekcy 200.7.101.139 port 33834 [preauth]
Oct 01 13:19:43 compute-0 sudo[158944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcaawcpuehidazljylvrywhpethdbhds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324783.4264097-315-60711556710228/AnsiballZ_systemd.py'
Oct 01 13:19:43 compute-0 sudo[158944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:44 compute-0 python3.9[158946]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:19:44 compute-0 systemd[1]: Reloading.
Oct 01 13:19:44 compute-0 systemd-sysv-generator[158978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:19:44 compute-0 systemd-rc-local-generator[158972]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:19:44 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:19:44 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:19:44 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:19:44 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:19:44 compute-0 sudo[158944]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:45 compute-0 sudo[159137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzztoyblauwdzinussorojhxalxttmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324784.7274034-325-255623980493389/AnsiballZ_file.py'
Oct 01 13:19:45 compute-0 sudo[159137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:45 compute-0 python3.9[159139]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:45 compute-0 sudo[159137]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:45 compute-0 ceph-mon[74802]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:45 compute-0 sudo[159289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfdeicbfcyfxxjptlzzpagdnsdonaiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324785.4708555-333-269869075503420/AnsiballZ_stat.py'
Oct 01 13:19:45 compute-0 sudo[159289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:46 compute-0 python3.9[159291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:46 compute-0 sudo[159289]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:46 compute-0 sudo[159412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwiabmrpxdmfcfpcbpleiqyfkrbjkagg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324785.4708555-333-269869075503420/AnsiballZ_copy.py'
Oct 01 13:19:46 compute-0 sudo[159412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:46 compute-0 python3.9[159414]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759324785.4708555-333-269869075503420/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:46 compute-0 sudo[159412]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:47 compute-0 ceph-mon[74802]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:47 compute-0 sudo[159564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnhgljndipriydccuiumjhnbiefgimko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324787.1843567-350-219346237131436/AnsiballZ_file.py'
Oct 01 13:19:47 compute-0 sudo[159564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:19:47
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms']
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:19:47 compute-0 python3.9[159566]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:19:47 compute-0 sudo[159564]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:19:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:48 compute-0 sudo[159716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuofrqfyemayhxjbvfoonoylhpvkqbpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324787.9780657-358-80100632460010/AnsiballZ_stat.py'
Oct 01 13:19:48 compute-0 sudo[159716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:48 compute-0 python3.9[159718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:19:48 compute-0 sudo[159716]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:48 compute-0 sudo[159839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdqopxxondijrmbwdxjfqxnwucosdmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324787.9780657-358-80100632460010/AnsiballZ_copy.py'
Oct 01 13:19:48 compute-0 sudo[159839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:49 compute-0 python3.9[159841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759324787.9780657-358-80100632460010/.source.json _original_basename=.6hxcsszs follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:49 compute-0 sudo[159839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:49 compute-0 ceph-mon[74802]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:49 compute-0 sudo[159991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojarwoejjdarnoptpvffqhthtpurumbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324789.255222-373-235210480791797/AnsiballZ_file.py'
Oct 01 13:19:49 compute-0 sudo[159991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:49 compute-0 python3.9[159993]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:19:49 compute-0 sudo[159991]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:50 compute-0 auditd[705]: Audit daemon rotating log files
Oct 01 13:19:50 compute-0 sudo[160143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvhnxoygbiyateidglnburopdjwizpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324790.0581028-381-19827041075574/AnsiballZ_stat.py'
Oct 01 13:19:50 compute-0 sudo[160143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:50 compute-0 sudo[160143]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:51 compute-0 sudo[160266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjniwunedpciqfwvvywvicigzrcqxowl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324790.0581028-381-19827041075574/AnsiballZ_copy.py'
Oct 01 13:19:51 compute-0 sudo[160266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:51 compute-0 sudo[160266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:51 compute-0 ceph-mon[74802]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:51 compute-0 sudo[160418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygoagwzjozgidbaoqwkdvfwggbfwkfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324791.5010338-398-72825313676884/AnsiballZ_container_config_data.py'
Oct 01 13:19:51 compute-0 sudo[160418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:52 compute-0 python3.9[160420]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 01 13:19:52 compute-0 sudo[160418]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:52 compute-0 sudo[160570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcjzepinnejplqbpuchkkrsumvkjkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324792.367411-407-157559561962576/AnsiballZ_container_config_hash.py'
Oct 01 13:19:52 compute-0 sudo[160570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:53 compute-0 python3.9[160572]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:19:53 compute-0 sudo[160570]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:53 compute-0 ceph-mon[74802]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:53 compute-0 sudo[160722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytjebakdghdmjylqdxjyofyexfyvwwvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324793.3451169-416-84304822970705/AnsiballZ_podman_container_info.py'
Oct 01 13:19:53 compute-0 sudo[160722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:54 compute-0 python3.9[160724]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 13:19:54 compute-0 sudo[160722]: pam_unix(sudo:session): session closed for user root
Oct 01 13:19:55 compute-0 sudo[160901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsgkcmbailzsnhyfmalhrxnunahiylst ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759324794.787761-429-263544095637746/AnsiballZ_edpm_container_manage.py'
Oct 01 13:19:55 compute-0 sudo[160901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:19:55 compute-0 ceph-mon[74802]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:55 compute-0 python3[160903]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:19:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:19:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5538 writes, 23K keys, 5538 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5538 writes, 846 syncs, 6.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5538 writes, 23K keys, 5538 commit groups, 1.0 writes per commit group, ingest: 18.76 MB, 0.03 MB/s
                                           Interval WAL: 5538 writes, 846 syncs, 6.55 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:19:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:19:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:19:57 compute-0 ceph-mon[74802]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:59 compute-0 ceph-mon[74802]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:19:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:20:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6794 writes, 28K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6794 writes, 1230 syncs, 5.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6794 writes, 28K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 19.73 MB, 0.03 MB/s
                                           Interval WAL: 6794 writes, 1230 syncs, 5.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:20:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:01 compute-0 ceph-mon[74802]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:02 compute-0 sshd-session[160983]: Invalid user TestUser from 27.254.137.144 port 55958
Oct 01 13:20:02 compute-0 sshd-session[160983]: Received disconnect from 27.254.137.144 port 55958:11: Bye Bye [preauth]
Oct 01 13:20:02 compute-0 sshd-session[160983]: Disconnected from invalid user TestUser 27.254.137.144 port 55958 [preauth]
Oct 01 13:20:03 compute-0 ceph-mon[74802]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:04 compute-0 podman[161004]: 2025-10-01 13:20:04.074485969 +0000 UTC m=+0.629465622 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 01 13:20:04 compute-0 podman[160917]: 2025-10-01 13:20:04.41666456 +0000 UTC m=+8.808708080 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct 01 13:20:04 compute-0 podman[161070]: 2025-10-01 13:20:04.549599274 +0000 UTC m=+0.021697571 image pull aa21cc3d2531fe07b45a943d4ac1ba0268bfab26b0884a4a00fbad7695318ba9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct 01 13:20:05 compute-0 podman[161070]: 2025-10-01 13:20:05.138150761 +0000 UTC m=+0.610249058 container create dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 13:20:05 compute-0 python3[160903]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab
Oct 01 13:20:05 compute-0 sudo[160901]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:05 compute-0 ceph-mon[74802]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:05 compute-0 sudo[161258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevefsfvqjicvatndbxlahhnerzuqspa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324805.4421074-437-150557414368795/AnsiballZ_stat.py'
Oct 01 13:20:05 compute-0 sudo[161258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:20:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5455 writes, 785 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
                                           Interval WAL: 5455 writes, 785 syncs, 6.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:20:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:05 compute-0 python3.9[161260]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:20:06 compute-0 sudo[161258]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:06 compute-0 sudo[161412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xllvjrxjktrovziclbcisdenmdsifvrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324806.1991887-446-162890182910970/AnsiballZ_file.py'
Oct 01 13:20:06 compute-0 sudo[161412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:06 compute-0 python3.9[161414]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:06 compute-0 sudo[161412]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:06 compute-0 sudo[161488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nachiuuemglkyvdsnrfrlgkaarmyiadp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324806.1991887-446-162890182910970/AnsiballZ_stat.py'
Oct 01 13:20:06 compute-0 sudo[161488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:07 compute-0 python3.9[161490]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:20:07 compute-0 sudo[161488]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:07 compute-0 ceph-mon[74802]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:07 compute-0 sudo[161639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjkonkavliokcfrxzbxkyzppyahdbusz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324807.2111309-446-137959937942174/AnsiballZ_copy.py'
Oct 01 13:20:07 compute-0 sudo[161639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 13:20:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:07 compute-0 python3.9[161641]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759324807.2111309-446-137959937942174/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:08 compute-0 sudo[161639]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:08 compute-0 sudo[161715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcsssygtputzwgdvsdiwfqiqxettrbrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324807.2111309-446-137959937942174/AnsiballZ_systemd.py'
Oct 01 13:20:08 compute-0 sudo[161715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:08 compute-0 python3.9[161717]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:20:08 compute-0 systemd[1]: Reloading.
Oct 01 13:20:08 compute-0 systemd-rc-local-generator[161745]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:20:08 compute-0 systemd-sysv-generator[161748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:20:08 compute-0 sudo[161715]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:09 compute-0 sudo[161826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvtcmcisoxgxggkshbcrmyokslsakhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324807.2111309-446-137959937942174/AnsiballZ_systemd.py'
Oct 01 13:20:09 compute-0 sudo[161826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:09 compute-0 python3.9[161828]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:09 compute-0 systemd[1]: Reloading.
Oct 01 13:20:09 compute-0 ceph-mon[74802]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:09 compute-0 systemd-rc-local-generator[161858]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:20:09 compute-0 systemd-sysv-generator[161861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:20:09 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 01 13:20:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acb41f7114ab618d63698fd674156b117e354f2ee6c45c2ffe9ed7a83f99763/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acb41f7114ab618d63698fd674156b117e354f2ee6c45c2ffe9ed7a83f99763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9.
Oct 01 13:20:10 compute-0 podman[161869]: 2025-10-01 13:20:10.01255315 +0000 UTC m=+0.185837647 container init dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + sudo -E kolla_set_configs
Oct 01 13:20:10 compute-0 podman[161869]: 2025-10-01 13:20:10.05304507 +0000 UTC m=+0.226329577 container start dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:20:10 compute-0 edpm-start-podman-container[161869]: ovn_metadata_agent
Oct 01 13:20:10 compute-0 podman[161892]: 2025-10-01 13:20:10.14959121 +0000 UTC m=+0.075693327 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:20:10 compute-0 edpm-start-podman-container[161868]: Creating additional drop-in dependency for "ovn_metadata_agent" (dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9)
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:20:10 compute-0 systemd[1]: Reloading.
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Validating config file
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Copying service configuration files
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Writing out command to execute
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: ++ cat /run_command
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + CMD=neutron-ovn-metadata-agent
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + ARGS=
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + sudo kolla_copy_cacerts
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + [[ ! -n '' ]]
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + . kolla_extend_start
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: Running command: 'neutron-ovn-metadata-agent'
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + umask 0022
Oct 01 13:20:10 compute-0 ovn_metadata_agent[161885]: + exec neutron-ovn-metadata-agent
Oct 01 13:20:10 compute-0 systemd-sysv-generator[161966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:20:10 compute-0 systemd-rc-local-generator[161960]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:20:10 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 01 13:20:10 compute-0 sudo[161826]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:10 compute-0 sshd-session[153495]: Connection closed by 192.168.122.30 port 39068
Oct 01 13:20:10 compute-0 sshd-session[153492]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:20:10 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Oct 01 13:20:10 compute-0 systemd[1]: session-48.scope: Consumed 56.529s CPU time.
Oct 01 13:20:10 compute-0 systemd-logind[818]: Session 48 logged out. Waiting for processes to exit.
Oct 01 13:20:10 compute-0 systemd-logind[818]: Removed session 48.
Oct 01 13:20:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:11 compute-0 ceph-mon[74802]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:12 compute-0 sudo[161999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:12 compute-0 sudo[161999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 sudo[161999]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 sudo[162024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:20:12 compute-0 sudo[162024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 sudo[162024]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 sudo[162049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:12 compute-0 sudo[162049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 INFO neutron.common.config [-] Logging enabled!
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 01 13:20:12 compute-0 sudo[162049]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.249 161890 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.250 161890 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.251 161890 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.252 161890 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.253 161890 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.254 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.255 161890 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.256 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.257 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.258 161890 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.259 161890 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.260 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.261 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.262 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.263 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.264 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.265 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.266 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.267 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.268 161890 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.269 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.270 161890 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.271 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.272 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.273 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.274 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.275 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.276 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.277 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.278 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.279 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.280 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.281 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.282 161890 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.283 161890 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.291 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.292 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.292 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.304 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7280030e-2ba6-406c-9fae-f8284a927c47 (UUID: 7280030e-2ba6-406c-9fae-f8284a927c47) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 01 13:20:12 compute-0 sudo[162074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 13:20:12 compute-0 sudo[162074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.331 161890 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.344 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.355 161890 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.398 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7280030e-2ba6-406c-9fae-f8284a927c47'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f240fd97850>], external_ids={}, name=7280030e-2ba6-406c-9fae-f8284a927c47, nb_cfg_timestamp=1759324751490, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.399 161890 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f240fd3f310>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.400 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.400 161890 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.401 161890 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.401 161890 INFO oslo_service.service [-] Starting 1 workers
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.405 161890 DEBUG oslo_service.service [-] Started child 162099 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.408 161890 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpoikdb4t9/privsep.sock']
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.410 162099 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166439'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.446 162099 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.447 162099 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.447 162099 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.452 162099 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.461 162099 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 01 13:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.469 162099 INFO eventlet.wsgi.server [-] (162099) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 01 13:20:12 compute-0 sudo[162074]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:20:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:20:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:12 compute-0 sudo[162124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:12 compute-0 sudo[162124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 sudo[162124]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 sudo[162150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:20:12 compute-0 sudo[162150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 sudo[162150]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 sudo[162175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:12 compute-0 sudo[162175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 sudo[162175]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:12 compute-0 sudo[162200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:20:12 compute-0 sudo[162200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:12 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.092 161890 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.093 161890 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpoikdb4t9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.974 162238 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.978 162238 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.980 162238 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:12.981 162238 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162238
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.096 162238 DEBUG oslo.privsep.daemon [-] privsep: reply[c26d54ba-75d6-4be4-bcf0-79595e75c21e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:20:13 compute-0 sudo[162200]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:13 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b98bf2d-d8d8-4704-91cc-6987f33f3114 does not exist
Oct 01 13:20:13 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 32f51114-f71f-486d-8271-3656aa9ab662 does not exist
Oct 01 13:20:13 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8acad495-b47b-4425-8211-e252f93d1248 does not exist
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:20:13 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:20:13 compute-0 sudo[162262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:13 compute-0 sudo[162262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:13 compute-0 sudo[162262]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:13 compute-0 sudo[162287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:20:13 compute-0 sudo[162287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:13 compute-0 sudo[162287]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:13 compute-0 sudo[162312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:13 compute-0 sudo[162312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:13 compute-0 sudo[162312]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.550 162238 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.551 162238 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:20:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:13.551 162238 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:20:13 compute-0 ceph-mon[74802]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:20:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:20:13 compute-0 sudo[162337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:20:13 compute-0 sudo[162337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:13 compute-0 podman[162403]: 2025-10-01 13:20:13.944293212 +0000 UTC m=+0.049414016 container create 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:20:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:13 compute-0 systemd[1]: Started libpod-conmon-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope.
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:13.917236267 +0000 UTC m=+0.022357111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:14.032944418 +0000 UTC m=+0.138065382 container init 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.044 162238 DEBUG oslo.privsep.daemon [-] privsep: reply[d681bf7d-4f9d-43de-a8c8-0b9cfdd65350]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.047 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, column=external_ids, values=({'neutron:ovn-metadata-id': 'dd134fee-c268-55e9-81d6-d964cb333c5f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:14.047618611 +0000 UTC m=+0.152739385 container start 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:14.052139371 +0000 UTC m=+0.157260185 container attach 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:20:14 compute-0 happy_swirles[162419]: 167 167
Oct 01 13:20:14 compute-0 systemd[1]: libpod-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope: Deactivated successfully.
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:14.056026731 +0000 UTC m=+0.161147495 container died 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.057 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.065 161890 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.066 161890 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.067 161890 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.068 161890 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.069 161890 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.070 161890 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.071 161890 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.072 161890 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.073 161890 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.074 161890 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.075 161890 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.076 161890 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.077 161890 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.078 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.079 161890 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.080 161890 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.081 161890 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.082 161890 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.083 161890 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.084 161890 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.085 161890 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.086 161890 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.087 161890 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.088 161890 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.089 161890 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.090 161890 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.091 161890 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.092 161890 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.093 161890 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.094 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.095 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.096 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:20:14 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:20:14.097 161890 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 13:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0527201647add611fc4b9dac242220a5b233c82e6bfca4cf6c42a17e8c8a7bf-merged.mount: Deactivated successfully.
Oct 01 13:20:14 compute-0 podman[162403]: 2025-10-01 13:20:14.116901019 +0000 UTC m=+0.222021783 container remove 7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:20:14 compute-0 systemd[1]: libpod-conmon-7d600eeaf2430ed83c14c1198f8bcafd368e05eaf42ac771d367773815ca5a21.scope: Deactivated successfully.
Oct 01 13:20:14 compute-0 podman[162442]: 2025-10-01 13:20:14.327108318 +0000 UTC m=+0.066425261 container create 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:20:14 compute-0 systemd[1]: Started libpod-conmon-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope.
Oct 01 13:20:14 compute-0 podman[162442]: 2025-10-01 13:20:14.300785526 +0000 UTC m=+0.040102489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:14 compute-0 podman[162442]: 2025-10-01 13:20:14.442236602 +0000 UTC m=+0.181553625 container init 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:20:14 compute-0 podman[162442]: 2025-10-01 13:20:14.458861185 +0000 UTC m=+0.198178098 container start 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:20:14 compute-0 podman[162442]: 2025-10-01 13:20:14.462790287 +0000 UTC m=+0.202107250 container attach 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:20:15 compute-0 nifty_leavitt[162458]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:20:15 compute-0 nifty_leavitt[162458]: --> relative data size: 1.0
Oct 01 13:20:15 compute-0 nifty_leavitt[162458]: --> All data devices are unavailable
Oct 01 13:20:15 compute-0 ceph-mon[74802]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:15 compute-0 systemd[1]: libpod-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Deactivated successfully.
Oct 01 13:20:15 compute-0 systemd[1]: libpod-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Consumed 1.103s CPU time.
Oct 01 13:20:15 compute-0 podman[162442]: 2025-10-01 13:20:15.615045333 +0000 UTC m=+1.354362286 container died 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:20:15 compute-0 sshd-session[162499]: Accepted publickey for zuul from 192.168.122.30 port 43770 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:20:15 compute-0 systemd-logind[818]: New session 49 of user zuul.
Oct 01 13:20:15 compute-0 systemd[1]: Started Session 49 of User zuul.
Oct 01 13:20:15 compute-0 sshd-session[162499]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:20:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b9158271b47321de54cc35531b0214fe4cd2c1b7941f7eb8c79e8e1628f8c44-merged.mount: Deactivated successfully.
Oct 01 13:20:16 compute-0 podman[162442]: 2025-10-01 13:20:16.5651563 +0000 UTC m=+2.304473213 container remove 9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:20:16 compute-0 systemd[1]: libpod-conmon-9c946d86c2decfa16d33cdf62e3fb1351877d233ab2e80b86a7f2e832dbc61fd.scope: Deactivated successfully.
Oct 01 13:20:16 compute-0 sudo[162337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:16 compute-0 sudo[162653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:16 compute-0 sudo[162653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:16 compute-0 sudo[162653]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:16 compute-0 sudo[162678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:20:16 compute-0 sudo[162678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:16 compute-0 sudo[162678]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:16 compute-0 sudo[162703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:16 compute-0 sudo[162703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:16 compute-0 sudo[162703]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:16 compute-0 sudo[162728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:20:16 compute-0 sudo[162728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:16 compute-0 python3.9[162652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.122214415 +0000 UTC m=+0.021122623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.302703646 +0000 UTC m=+0.201611834 container create 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:20:17 compute-0 systemd[1]: Started libpod-conmon-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope.
Oct 01 13:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.394598734 +0000 UTC m=+0.293506952 container init 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.405145003 +0000 UTC m=+0.304053191 container start 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:20:17 compute-0 heuristic_goodall[162889]: 167 167
Oct 01 13:20:17 compute-0 systemd[1]: libpod-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope: Deactivated successfully.
Oct 01 13:20:17 compute-0 conmon[162889]: conmon 2969e221d0b4507af23a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope/container/memory.events
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.527618903 +0000 UTC m=+0.426527111 container attach 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.529508821 +0000 UTC m=+0.428417059 container died 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f14d0553f23a9f47728ad0cc47b9da734a8c3f57be8a80cf84adafc4bbbc4c1-merged.mount: Deactivated successfully.
Oct 01 13:20:17 compute-0 podman[162809]: 2025-10-01 13:20:17.658895378 +0000 UTC m=+0.557803606 container remove 2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goodall, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:20:17 compute-0 systemd[1]: libpod-conmon-2969e221d0b4507af23adddd07ce92e0523a769c95add8cad078eef0e73a0658.scope: Deactivated successfully.
Oct 01 13:20:17 compute-0 ceph-mon[74802]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:17 compute-0 sudo[162981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuiehmnsrmtkwrqsuszimtrzioqvtzfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324817.275363-34-228742910567170/AnsiballZ_command.py'
Oct 01 13:20:17 compute-0 sudo[162981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:17 compute-0 podman[162989]: 2025-10-01 13:20:17.853906897 +0000 UTC m=+0.048942262 container create 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:20:17 compute-0 systemd[1]: Started libpod-conmon-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope.
Oct 01 13:20:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:17 compute-0 podman[162989]: 2025-10-01 13:20:17.833129087 +0000 UTC m=+0.028164502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:17 compute-0 podman[162989]: 2025-10-01 13:20:17.940517855 +0000 UTC m=+0.135553230 container init 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:20:17 compute-0 python3.9[162983]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:17 compute-0 podman[162989]: 2025-10-01 13:20:17.948439323 +0000 UTC m=+0.143474688 container start 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:20:17 compute-0 podman[162989]: 2025-10-01 13:20:17.955840534 +0000 UTC m=+0.150875919 container attach 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:20:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:18 compute-0 sudo[162981]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]: {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     "0": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "devices": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "/dev/loop3"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             ],
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_name": "ceph_lv0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_size": "21470642176",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "name": "ceph_lv0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "tags": {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_name": "ceph",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.crush_device_class": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.encrypted": "0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_id": "0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.vdo": "0"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             },
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "vg_name": "ceph_vg0"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         }
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     ],
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     "1": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "devices": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "/dev/loop4"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             ],
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_name": "ceph_lv1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_size": "21470642176",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "name": "ceph_lv1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "tags": {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_name": "ceph",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.crush_device_class": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.encrypted": "0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_id": "1",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.vdo": "0"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             },
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "vg_name": "ceph_vg1"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         }
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     ],
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     "2": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "devices": [
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "/dev/loop5"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             ],
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_name": "ceph_lv2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_size": "21470642176",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "name": "ceph_lv2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "tags": {
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.cluster_name": "ceph",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.crush_device_class": "",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.encrypted": "0",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osd_id": "2",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:                 "ceph.vdo": "0"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             },
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "type": "block",
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:             "vg_name": "ceph_vg2"
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:         }
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]:     ]
Oct 01 13:20:18 compute-0 intelligent_jackson[163006]: }
Oct 01 13:20:18 compute-0 systemd[1]: libpod-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope: Deactivated successfully.
Oct 01 13:20:18 compute-0 conmon[163006]: conmon 3baa4eea6fc5fc372dbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope/container/memory.events
Oct 01 13:20:18 compute-0 podman[162989]: 2025-10-01 13:20:18.679794803 +0000 UTC m=+0.874830198 container died 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b191f2f60b58e3ba78bd5891684c81d437d8f409121ba7d5a0e5b9d1bb80146-merged.mount: Deactivated successfully.
Oct 01 13:20:18 compute-0 podman[162989]: 2025-10-01 13:20:18.734235455 +0000 UTC m=+0.929270820 container remove 3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:20:18 compute-0 systemd[1]: libpod-conmon-3baa4eea6fc5fc372dbdfc0a7ba3e8383c0fdf3a303d01988174173d5cb669c8.scope: Deactivated successfully.
Oct 01 13:20:18 compute-0 sudo[162728]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:18 compute-0 sudo[163162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:18 compute-0 sudo[163162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:18 compute-0 sudo[163162]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:18 compute-0 sudo[163212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmkodjoruhxsytjjzllumogtqnywkggt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324818.280049-45-184436869019002/AnsiballZ_systemd_service.py'
Oct 01 13:20:18 compute-0 sudo[163212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:18 compute-0 sudo[163216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:20:18 compute-0 sudo[163216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:18 compute-0 sudo[163216]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:19 compute-0 sudo[163241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:19 compute-0 sudo[163241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:19 compute-0 sudo[163241]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:19 compute-0 sudo[163266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:20:19 compute-0 sudo[163266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:19 compute-0 python3.9[163215]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:20:19 compute-0 systemd[1]: Reloading.
Oct 01 13:20:19 compute-0 systemd-sysv-generator[163343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:20:19 compute-0 systemd-rc-local-generator[163340]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.465210004 +0000 UTC m=+0.055800236 container create 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:20:19 compute-0 sudo[163212]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:19 compute-0 systemd[1]: Started libpod-conmon-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope.
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.433370207 +0000 UTC m=+0.023960529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.587241129 +0000 UTC m=+0.177831411 container init 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.598941615 +0000 UTC m=+0.189531847 container start 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.603145497 +0000 UTC m=+0.193735779 container attach 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:20:19 compute-0 busy_austin[163379]: 167 167
Oct 01 13:20:19 compute-0 systemd[1]: libpod-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope: Deactivated successfully.
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.607324647 +0000 UTC m=+0.197914879 container died 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1e2e583f055cdb6ee566e0e441e10ef2ee68fb97e6e49d2c6c44f92f823b073-merged.mount: Deactivated successfully.
Oct 01 13:20:19 compute-0 podman[163363]: 2025-10-01 13:20:19.642793757 +0000 UTC m=+0.233383989 container remove 305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:20:19 compute-0 systemd[1]: libpod-conmon-305bd308fc31528c3647e46d04d8bc5930abbe817149ec40359337572ed4a614.scope: Deactivated successfully.
Oct 01 13:20:19 compute-0 ceph-mon[74802]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:19 compute-0 podman[163463]: 2025-10-01 13:20:19.83926571 +0000 UTC m=+0.068736490 container create 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:20:19 compute-0 systemd[1]: Started libpod-conmon-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope.
Oct 01 13:20:19 compute-0 podman[163463]: 2025-10-01 13:20:19.813545307 +0000 UTC m=+0.043016157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:20:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:20:19 compute-0 podman[163463]: 2025-10-01 13:20:19.953193794 +0000 UTC m=+0.182664604 container init 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:20:19 compute-0 podman[163463]: 2025-10-01 13:20:19.967722547 +0000 UTC m=+0.197193327 container start 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:20:19 compute-0 podman[163463]: 2025-10-01 13:20:19.971798505 +0000 UTC m=+0.201269305 container attach 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 13:20:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:20 compute-0 python3.9[163573]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:20:20 compute-0 network[163590]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:20:20 compute-0 network[163591]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:20:20 compute-0 network[163592]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:20:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]: {
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_id": 0,
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "type": "bluestore"
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     },
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_id": 2,
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "type": "bluestore"
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     },
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_id": 1,
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:         "type": "bluestore"
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]:     }
Oct 01 13:20:21 compute-0 priceless_zhukovsky[163495]: }
Oct 01 13:20:21 compute-0 podman[163463]: 2025-10-01 13:20:21.069601245 +0000 UTC m=+1.299072025 container died 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:20:21 compute-0 systemd[1]: libpod-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Deactivated successfully.
Oct 01 13:20:21 compute-0 systemd[1]: libpod-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Consumed 1.103s CPU time.
Oct 01 13:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab250002346a77ae0883ac0ecd9e82f186b08a42d7c44a83fd20bb0b102fc7b2-merged.mount: Deactivated successfully.
Oct 01 13:20:21 compute-0 podman[163463]: 2025-10-01 13:20:21.258023417 +0000 UTC m=+1.487494197 container remove 1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:20:21 compute-0 systemd[1]: libpod-conmon-1b62ea3a5c2c358cb4b3f134d5caba510bdde9f0920a265097cb175dae874b45.scope: Deactivated successfully.
Oct 01 13:20:21 compute-0 sudo[163266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:20:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:20:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f5ac27ee-6119-49e7-9f4e-a2f5654476ef does not exist
Oct 01 13:20:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c3e16c15-5bdc-4ca7-ad9f-13afb20689df does not exist
Oct 01 13:20:21 compute-0 sudo[163645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:20:21 compute-0 sudo[163645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:21 compute-0 sudo[163645]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:21 compute-0 sudo[163670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:20:21 compute-0 sudo[163670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:20:21 compute-0 sudo[163670]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:21 compute-0 ceph-mon[74802]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:20:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:23 compute-0 ceph-mon[74802]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:25 compute-0 sudo[163947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzovfeorczcdproebspquxdfjrjczsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324825.0679085-64-240248048732706/AnsiballZ_systemd_service.py'
Oct 01 13:20:25 compute-0 sudo[163947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:25 compute-0 python3.9[163949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:25 compute-0 sudo[163947]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:25 compute-0 ceph-mon[74802]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:26 compute-0 sudo[164100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luuqluwsxfdvpyttvpdcchgpefosxdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324825.8308547-64-59181573452295/AnsiballZ_systemd_service.py'
Oct 01 13:20:26 compute-0 sudo[164100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:26 compute-0 python3.9[164102]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:26 compute-0 sudo[164100]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:26 compute-0 sudo[164253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxtbnrpgzsrezgwmwxxwbbzylzfhzhpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324826.5657916-64-68296501797157/AnsiballZ_systemd_service.py'
Oct 01 13:20:26 compute-0 sudo[164253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:27 compute-0 python3.9[164255]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:27 compute-0 sudo[164253]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:27 compute-0 sudo[164406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyktfyiqweivkmvxjiwqmzwgrjuhqtly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324827.4366426-64-141690662173443/AnsiballZ_systemd_service.py'
Oct 01 13:20:27 compute-0 sudo[164406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:27 compute-0 ceph-mon[74802]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:28 compute-0 python3.9[164408]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:28 compute-0 sudo[164406]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:28 compute-0 sudo[164559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezruaxchhciwbazalrxafmqvnzeitezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324828.2012386-64-223589248526923/AnsiballZ_systemd_service.py'
Oct 01 13:20:28 compute-0 sudo[164559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:28 compute-0 python3.9[164561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:28 compute-0 sudo[164559]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:29 compute-0 sudo[164712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrmzubbuztenifvtmhgczuvexpglnyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324829.052342-64-279831860805062/AnsiballZ_systemd_service.py'
Oct 01 13:20:29 compute-0 sudo[164712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:29 compute-0 python3.9[164714]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:29 compute-0 sudo[164712]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:29 compute-0 ceph-mon[74802]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:30 compute-0 sudo[164865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipnirxetnmczsnrwrtlqpglbtllrivxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324829.8211052-64-74190336040949/AnsiballZ_systemd_service.py'
Oct 01 13:20:30 compute-0 sudo[164865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:30 compute-0 python3.9[164867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:20:30 compute-0 sudo[164865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:31 compute-0 sudo[165018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udqrmyvjkdlognapeicfepilmpxxpctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324830.7155924-116-49116042820906/AnsiballZ_file.py'
Oct 01 13:20:31 compute-0 sudo[165018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:31 compute-0 python3.9[165020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:31 compute-0 sudo[165018]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:31 compute-0 ceph-mon[74802]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:31 compute-0 sudo[165170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxpilyllfvvbdngrmonwynckaxpgufua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324831.610711-116-151213194016406/AnsiballZ_file.py'
Oct 01 13:20:31 compute-0 sudo[165170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:32 compute-0 python3.9[165172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:32 compute-0 sudo[165170]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:32 compute-0 sudo[165322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyxzupiyjxpjlpocmosgpxwiwzzebbpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324832.2137837-116-222829382732469/AnsiballZ_file.py'
Oct 01 13:20:32 compute-0 sudo[165322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:32 compute-0 sshd[1010]: Timeout before authentication for connection from 14.103.127.7 to 38.102.83.245, pid = 147821
Oct 01 13:20:32 compute-0 python3.9[165324]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:32 compute-0 sudo[165322]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:33 compute-0 sudo[165474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkvwzxayjhqrjkssgmvlprbmfbvrztg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324832.8843536-116-94958493961514/AnsiballZ_file.py'
Oct 01 13:20:33 compute-0 sudo[165474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:33 compute-0 python3.9[165476]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:33 compute-0 sudo[165474]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:33 compute-0 ceph-mon[74802]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:33 compute-0 sudo[165626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hayysrbsqxmnafqmgjptktgxunbnmhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324833.5536163-116-47661949084678/AnsiballZ_file.py'
Oct 01 13:20:33 compute-0 sudo[165626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:33 compute-0 python3.9[165628]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:34 compute-0 sudo[165626]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:34 compute-0 sudo[165788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfeiseqrasyxvykkatfrrdzccxxfznmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324834.1302164-116-236923063714429/AnsiballZ_file.py'
Oct 01 13:20:34 compute-0 sudo[165788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:34 compute-0 podman[165752]: 2025-10-01 13:20:34.491184746 +0000 UTC m=+0.106241944 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:20:34 compute-0 python3.9[165798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:34 compute-0 sudo[165788]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:34 compute-0 ceph-mon[74802]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:35 compute-0 sudo[165958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmyntmvzntlgwmkpetrpjueyulyyfbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324834.8119514-116-36662905617329/AnsiballZ_file.py'
Oct 01 13:20:35 compute-0 sudo[165958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:35 compute-0 python3.9[165960]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:35 compute-0 sudo[165958]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:35 compute-0 sudo[166110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ostflzujqmcpmhcevdljfywpzhqttqcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324835.396037-166-124691298209088/AnsiballZ_file.py'
Oct 01 13:20:35 compute-0 sudo[166110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:35 compute-0 python3.9[166112]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:35 compute-0 sudo[166110]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:36 compute-0 sudo[166262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvnascfznmefrfcvyezjtvwqfezhjokh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324836.0116382-166-177131080791521/AnsiballZ_file.py'
Oct 01 13:20:36 compute-0 sudo[166262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:36 compute-0 python3.9[166264]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:36 compute-0 sudo[166262]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:36 compute-0 sudo[166414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcsbbhwdkohtwdtieermdwfqexvxsylt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324836.7146585-166-268444322493848/AnsiballZ_file.py'
Oct 01 13:20:36 compute-0 sudo[166414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:37 compute-0 ceph-mon[74802]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:37 compute-0 python3.9[166416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:37 compute-0 sudo[166414]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:37 compute-0 sudo[166566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahunkysmhjdkfpycvcfypmyecgpraohb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324837.4020083-166-11645393348147/AnsiballZ_file.py'
Oct 01 13:20:37 compute-0 sudo[166566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:37 compute-0 python3.9[166568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:37 compute-0 sudo[166566]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:38 compute-0 sudo[166718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpettirlazkqetxwbkdggmzdohzkslsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324838.072393-166-197658897163161/AnsiballZ_file.py'
Oct 01 13:20:38 compute-0 sudo[166718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:38 compute-0 python3.9[166720]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:38 compute-0 sudo[166718]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:38 compute-0 sudo[166870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrdnpewclqofcecpdatsgegxasflzjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324838.6526182-166-220839254896576/AnsiballZ_file.py'
Oct 01 13:20:38 compute-0 sudo[166870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:39 compute-0 ceph-mon[74802]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:39 compute-0 python3.9[166872]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:39 compute-0 sudo[166870]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:39 compute-0 sudo[167022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoacwmgfokjvzsnjslviywkbrnlplvvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324839.2797248-166-206728088596452/AnsiballZ_file.py'
Oct 01 13:20:39 compute-0 sudo[167022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:39 compute-0 python3.9[167024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:20:39 compute-0 sudo[167022]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:40 compute-0 sudo[167174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzqafvydcjqzhysvllzofyiuiwbxjgwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324839.942559-217-9296107664239/AnsiballZ_command.py'
Oct 01 13:20:40 compute-0 sudo[167174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:40 compute-0 podman[167176]: 2025-10-01 13:20:40.310151351 +0000 UTC m=+0.066797060 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:20:40 compute-0 python3.9[167177]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:40 compute-0 sudo[167174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:41 compute-0 ceph-mon[74802]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:41 compute-0 python3.9[167350]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:20:41 compute-0 sudo[167500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuyvhnwcwveolfqdqvzivmszprrrplbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324841.455092-235-69855427805490/AnsiballZ_systemd_service.py'
Oct 01 13:20:41 compute-0 sudo[167500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:42 compute-0 python3.9[167502]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:20:42 compute-0 systemd[1]: Reloading.
Oct 01 13:20:42 compute-0 systemd-rc-local-generator[167524]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:20:42 compute-0 systemd-sysv-generator[167531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:20:42 compute-0 sudo[167500]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:42 compute-0 sudo[167690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyxqquwcmtphffuaxeyiqzwrhozwfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324842.4922233-243-278302656914631/AnsiballZ_command.py'
Oct 01 13:20:42 compute-0 sudo[167690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:42 compute-0 python3.9[167692]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:42 compute-0 sudo[167690]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:43 compute-0 ceph-mon[74802]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:43 compute-0 sshd-session[167606]: Invalid user administrator from 80.253.31.232 port 51012
Oct 01 13:20:43 compute-0 sudo[167843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzvyehrodandurftkfhtcbqeaarbrynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324843.0955331-243-122132039353969/AnsiballZ_command.py'
Oct 01 13:20:43 compute-0 sudo[167843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:43 compute-0 sshd-session[167606]: Received disconnect from 80.253.31.232 port 51012:11: Bye Bye [preauth]
Oct 01 13:20:43 compute-0 sshd-session[167606]: Disconnected from invalid user administrator 80.253.31.232 port 51012 [preauth]
Oct 01 13:20:43 compute-0 python3.9[167845]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:43 compute-0 sudo[167843]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:44 compute-0 sudo[167996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstzmbdnlffgxatrvhsdqnxhgtuumiwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324843.6896276-243-87448148431187/AnsiballZ_command.py'
Oct 01 13:20:44 compute-0 sudo[167996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:44 compute-0 python3.9[167998]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:44 compute-0 sudo[167996]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:44 compute-0 sudo[168151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-letbucsgjoytqjvnqejopywlmhobgnww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324844.3937957-243-280405795309530/AnsiballZ_command.py'
Oct 01 13:20:44 compute-0 sudo[168151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:44 compute-0 python3.9[168153]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:44 compute-0 sudo[168151]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:45 compute-0 sshd-session[168100]: Invalid user devuser from 156.236.31.46 port 44600
Oct 01 13:20:45 compute-0 sshd-session[168100]: Received disconnect from 156.236.31.46 port 44600:11: Bye Bye [preauth]
Oct 01 13:20:45 compute-0 sshd-session[168100]: Disconnected from invalid user devuser 156.236.31.46 port 44600 [preauth]
Oct 01 13:20:45 compute-0 ceph-mon[74802]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:45 compute-0 sudo[168304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebeuoilnidcpdhmsdayjnshnxshadwnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324845.106993-243-25396346116213/AnsiballZ_command.py'
Oct 01 13:20:45 compute-0 sudo[168304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:45 compute-0 python3.9[168306]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:45 compute-0 sudo[168304]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:46 compute-0 sudo[168457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpodlidxlvybkxojzwvndispytbxdat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324845.82736-243-49237692270430/AnsiballZ_command.py'
Oct 01 13:20:46 compute-0 sudo[168457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:46 compute-0 python3.9[168459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:46 compute-0 sudo[168457]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:46 compute-0 sudo[168610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haursucuygvxvzwcafqscmeveylqbjnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324846.5469818-243-65989817016970/AnsiballZ_command.py'
Oct 01 13:20:46 compute-0 sudo[168610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:47 compute-0 python3.9[168612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:20:47 compute-0 sudo[168610]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:47 compute-0 ceph-mon[74802]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:20:47
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr']
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:20:47 compute-0 sudo[168763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnasxohvkxcykaksqgicigjbramvblst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324847.4041011-297-184451249951674/AnsiballZ_getent.py'
Oct 01 13:20:47 compute-0 sudo[168763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:20:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:48 compute-0 python3.9[168765]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 01 13:20:48 compute-0 sudo[168763]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:48 compute-0 sudo[168916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmuhczbvkojqrrvxoeifnkelglwejxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324848.242231-305-56111422866088/AnsiballZ_group.py'
Oct 01 13:20:48 compute-0 sudo[168916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:48 compute-0 python3.9[168918]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:20:49 compute-0 groupadd[168919]: group added to /etc/group: name=libvirt, GID=42473
Oct 01 13:20:49 compute-0 groupadd[168919]: group added to /etc/gshadow: name=libvirt
Oct 01 13:20:49 compute-0 groupadd[168919]: new group: name=libvirt, GID=42473
Oct 01 13:20:49 compute-0 sudo[168916]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:49 compute-0 ceph-mon[74802]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:50 compute-0 sudo[169074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuysflwkneiqxdfdgphxmhnszrcbevjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324849.483047-313-151717201929342/AnsiballZ_user.py'
Oct 01 13:20:50 compute-0 sudo[169074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:50 compute-0 python3.9[169076]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 13:20:50 compute-0 useradd[169078]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 01 13:20:50 compute-0 sudo[169074]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:51 compute-0 sudo[169234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlzznnwwyhvasukmyuteulznncwsbbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324850.8152995-324-273759889522126/AnsiballZ_setup.py'
Oct 01 13:20:51 compute-0 sudo[169234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:51 compute-0 python3.9[169236]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:20:51 compute-0 sudo[169234]: pam_unix(sudo:session): session closed for user root
Oct 01 13:20:51 compute-0 ceph-mon[74802]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:52 compute-0 sudo[169318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbtaaeuwuwmpfcdtpvprsstxommhijqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324850.8152995-324-273759889522126/AnsiballZ_dnf.py'
Oct 01 13:20:52 compute-0 sudo[169318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:20:52 compute-0 python3.9[169320]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:20:53 compute-0 ceph-mon[74802]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:55 compute-0 ceph-mon[74802]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:20:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:20:57 compute-0 ceph-mon[74802]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:59 compute-0 ceph-mon[74802]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:20:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:00 compute-0 sshd-session[169332]: Invalid user seekcy from 200.7.101.139 port 51736
Oct 01 13:21:00 compute-0 sshd-session[169332]: Received disconnect from 200.7.101.139 port 51736:11: Bye Bye [preauth]
Oct 01 13:21:00 compute-0 sshd-session[169332]: Disconnected from invalid user seekcy 200.7.101.139 port 51736 [preauth]
Oct 01 13:21:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:01 compute-0 ceph-mon[74802]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:03 compute-0 ceph-mon[74802]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:05 compute-0 ceph-mon[74802]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:05 compute-0 podman[169456]: 2025-10-01 13:21:05.58516991 +0000 UTC m=+0.137162651 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS)
Oct 01 13:21:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:07 compute-0 ceph-mon[74802]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:21:09 compute-0 ceph-mon[74802]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:21:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:21:10 compute-0 podman[169533]: 2025-10-01 13:21:10.509450918 +0000 UTC m=+0.063379193 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 13:21:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:11 compute-0 ceph-mon[74802]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:21:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 01 13:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.284 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:21:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:21:13 compute-0 ceph-mon[74802]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct 01 13:21:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 13:21:15 compute-0 ceph-mon[74802]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 13:21:15 compute-0 sshd-session[169552]: Invalid user seekcy from 27.254.137.144 port 51496
Oct 01 13:21:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 13:21:16 compute-0 sshd-session[169552]: Received disconnect from 27.254.137.144 port 51496:11: Bye Bye [preauth]
Oct 01 13:21:16 compute-0 sshd-session[169552]: Disconnected from invalid user seekcy 27.254.137.144 port 51496 [preauth]
Oct 01 13:21:17 compute-0 ceph-mon[74802]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:21:19 compute-0 ceph-mon[74802]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:21:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 01 13:21:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:21 compute-0 ceph-mon[74802]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 01 13:21:21 compute-0 sudo[169560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:21 compute-0 sudo[169560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:21 compute-0 sudo[169560]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:21 compute-0 sudo[169585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:21:21 compute-0 sudo[169585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:21 compute-0 sudo[169585]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:21 compute-0 sudo[169610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:21 compute-0 sudo[169610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:21 compute-0 sudo[169610]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:21 compute-0 sudo[169635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:21:21 compute-0 sudo[169635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 01 13:21:22 compute-0 podman[169730]: 2025-10-01 13:21:22.744887316 +0000 UTC m=+0.466658492 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:23 compute-0 podman[169730]: 2025-10-01 13:21:23.146234644 +0000 UTC m=+0.868005770 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:21:23 compute-0 ceph-mon[74802]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct 01 13:21:23 compute-0 sudo[169635]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:21:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:21:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:23 compute-0 sudo[169885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:23 compute-0 sudo[169885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:23 compute-0 sudo[169885]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Oct 01 13:21:24 compute-0 sudo[169910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:21:24 compute-0 sudo[169910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 sudo[169910]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 sudo[169935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:24 compute-0 sudo[169935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 sudo[169935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 sudo[169960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:21:24 compute-0 sudo[169960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 sudo[169960]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6ce6cd43-fe6b-412f-ad00-3e0a0afe6fe5 does not exist
Oct 01 13:21:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 33e59683-38e4-46c3-aab5-f05e03cd004c does not exist
Oct 01 13:21:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0e499d79-b566-498d-b66b-45c0a8f461e0 does not exist
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:21:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:21:24 compute-0 sudo[170016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:24 compute-0 sudo[170016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 sudo[170016]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 sudo[170041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:21:24 compute-0 sudo[170041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:24 compute-0 ceph-mon[74802]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:21:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:21:24 compute-0 sudo[170041]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:24 compute-0 sudo[170066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:24 compute-0 sudo[170066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:24 compute-0 sudo[170066]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:25 compute-0 sudo[170091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:21:25 compute-0 sudo[170091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.371578735 +0000 UTC m=+0.037656969 container create ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:25 compute-0 systemd[1]: Started libpod-conmon-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope.
Oct 01 13:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.354651659 +0000 UTC m=+0.020729923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.457753854 +0000 UTC m=+0.123832098 container init ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.46387765 +0000 UTC m=+0.129955894 container start ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.469336756 +0000 UTC m=+0.135415020 container attach ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:21:25 compute-0 zen_allen[170173]: 167 167
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.472155212 +0000 UTC m=+0.138233446 container died ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:21:25 compute-0 systemd[1]: libpod-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope: Deactivated successfully.
Oct 01 13:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8349ccc87ce1421157b481f70852ed33219650095a40f5ba3763336901207dc-merged.mount: Deactivated successfully.
Oct 01 13:21:25 compute-0 podman[170156]: 2025-10-01 13:21:25.534905396 +0000 UTC m=+0.200983630 container remove ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:21:25 compute-0 systemd[1]: libpod-conmon-ff5f928845bada0e566ab63e2db2173d1009258044cb922e10efc754d5a86ee2.scope: Deactivated successfully.
Oct 01 13:21:25 compute-0 podman[170199]: 2025-10-01 13:21:25.744811997 +0000 UTC m=+0.064233950 container create 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:21:25 compute-0 podman[170199]: 2025-10-01 13:21:25.710577343 +0000 UTC m=+0.029999356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:25 compute-0 systemd[1]: Started libpod-conmon-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope.
Oct 01 13:21:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:25 compute-0 podman[170199]: 2025-10-01 13:21:25.90659184 +0000 UTC m=+0.226013803 container init 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:25 compute-0 podman[170199]: 2025-10-01 13:21:25.912685076 +0000 UTC m=+0.232106989 container start 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:21:25 compute-0 podman[170199]: 2025-10-01 13:21:25.916129992 +0000 UTC m=+0.235552005 container attach 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:21:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct 01 13:21:26 compute-0 nice_edison[170216]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:21:26 compute-0 nice_edison[170216]: --> relative data size: 1.0
Oct 01 13:21:26 compute-0 nice_edison[170216]: --> All data devices are unavailable
Oct 01 13:21:26 compute-0 systemd[1]: libpod-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope: Deactivated successfully.
Oct 01 13:21:26 compute-0 podman[170199]: 2025-10-01 13:21:26.930597858 +0000 UTC m=+1.250019781 container died 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9137cefe79cd583c7b0dfd6af639f70f8b3899f11101832dd3410de137376912-merged.mount: Deactivated successfully.
Oct 01 13:21:27 compute-0 podman[170199]: 2025-10-01 13:21:27.015564828 +0000 UTC m=+1.334986741 container remove 0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_edison, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:21:27 compute-0 systemd[1]: libpod-conmon-0e24f075da8e0ccaaffaa4e0c3df0aa2c5414137d0441314fb21e33d48c9f473.scope: Deactivated successfully.
Oct 01 13:21:27 compute-0 sudo[170091]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:27 compute-0 ceph-mon[74802]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct 01 13:21:27 compute-0 sudo[170255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:27 compute-0 sudo[170255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:27 compute-0 sudo[170255]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:27 compute-0 sudo[170280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:21:27 compute-0 sudo[170280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:27 compute-0 sudo[170280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:27 compute-0 sudo[170305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:27 compute-0 sudo[170305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:27 compute-0 sudo[170305]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:27 compute-0 sudo[170330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:21:27 compute-0 sudo[170330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:27 compute-0 podman[170395]: 2025-10-01 13:21:27.579827716 +0000 UTC m=+0.023161968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:27 compute-0 podman[170395]: 2025-10-01 13:21:27.888011664 +0000 UTC m=+0.331345926 container create 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:21:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct 01 13:21:29 compute-0 systemd[1]: Started libpod-conmon-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope.
Oct 01 13:21:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:30 compute-0 podman[170395]: 2025-10-01 13:21:30.237008174 +0000 UTC m=+2.680342516 container init 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:21:30 compute-0 ceph-mon[74802]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct 01 13:21:30 compute-0 podman[170395]: 2025-10-01 13:21:30.249566097 +0000 UTC m=+2.692900369 container start 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:21:30 compute-0 sweet_bassi[170415]: 167 167
Oct 01 13:21:30 compute-0 systemd[1]: libpod-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope: Deactivated successfully.
Oct 01 13:21:30 compute-0 podman[170395]: 2025-10-01 13:21:30.261489671 +0000 UTC m=+2.704823913 container attach 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:21:30 compute-0 podman[170395]: 2025-10-01 13:21:30.263363098 +0000 UTC m=+2.706697360 container died 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2579c222205e7844a7f0094f1e60865499f2a0b2ea24ef16584fd9b0a4743d-merged.mount: Deactivated successfully.
Oct 01 13:21:30 compute-0 podman[170395]: 2025-10-01 13:21:30.36345695 +0000 UTC m=+2.806791212 container remove 04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:21:30 compute-0 systemd[1]: libpod-conmon-04504f47c4e1920064582d033c7dd6877da06e14c8f15e30e4a0c5e37d801b3e.scope: Deactivated successfully.
Oct 01 13:21:30 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:21:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:21:30 compute-0 podman[170439]: 2025-10-01 13:21:30.58154239 +0000 UTC m=+0.058239976 container create 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:21:30 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 01 13:21:30 compute-0 systemd[1]: Started libpod-conmon-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope.
Oct 01 13:21:30 compute-0 podman[170439]: 2025-10-01 13:21:30.554412233 +0000 UTC m=+0.031109829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:30 compute-0 podman[170439]: 2025-10-01 13:21:30.726480261 +0000 UTC m=+0.203177887 container init 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:21:30 compute-0 podman[170439]: 2025-10-01 13:21:30.735347741 +0000 UTC m=+0.212045327 container start 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:21:30 compute-0 podman[170439]: 2025-10-01 13:21:30.740248441 +0000 UTC m=+0.216946027 container attach 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:31 compute-0 ceph-mon[74802]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:31 compute-0 suspicious_newton[170457]: {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     "0": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "devices": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "/dev/loop3"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             ],
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_name": "ceph_lv0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_size": "21470642176",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "name": "ceph_lv0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "tags": {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_name": "ceph",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.crush_device_class": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.encrypted": "0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_id": "0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.vdo": "0"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             },
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "vg_name": "ceph_vg0"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         }
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     ],
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     "1": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "devices": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "/dev/loop4"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             ],
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_name": "ceph_lv1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_size": "21470642176",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "name": "ceph_lv1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "tags": {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_name": "ceph",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.crush_device_class": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.encrypted": "0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_id": "1",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.vdo": "0"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             },
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "vg_name": "ceph_vg1"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         }
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     ],
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     "2": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "devices": [
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "/dev/loop5"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             ],
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_name": "ceph_lv2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_size": "21470642176",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "name": "ceph_lv2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "tags": {
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.cluster_name": "ceph",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.crush_device_class": "",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.encrypted": "0",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osd_id": "2",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:                 "ceph.vdo": "0"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             },
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "type": "block",
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:             "vg_name": "ceph_vg2"
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:         }
Oct 01 13:21:31 compute-0 suspicious_newton[170457]:     ]
Oct 01 13:21:31 compute-0 suspicious_newton[170457]: }
Oct 01 13:21:31 compute-0 systemd[1]: libpod-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope: Deactivated successfully.
Oct 01 13:21:31 compute-0 podman[170439]: 2025-10-01 13:21:31.472702226 +0000 UTC m=+0.949399822 container died 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2015e5495683541549e0fed9410071f02ccf90d2302be81c46c117a62260e570-merged.mount: Deactivated successfully.
Oct 01 13:21:31 compute-0 podman[170439]: 2025-10-01 13:21:31.531055046 +0000 UTC m=+1.007752622 container remove 7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_newton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:21:31 compute-0 systemd[1]: libpod-conmon-7555e2123cd8f8b2a9b583cf80e2366f6ef641b4171cb518778c8c4d852c9a91.scope: Deactivated successfully.
Oct 01 13:21:31 compute-0 sudo[170330]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:31 compute-0 sudo[170478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:31 compute-0 sudo[170478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:31 compute-0 sudo[170478]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:31 compute-0 sudo[170503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:21:31 compute-0 sudo[170503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:31 compute-0 sudo[170503]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:31 compute-0 sudo[170528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:31 compute-0 sudo[170528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:31 compute-0 sudo[170528]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:31 compute-0 sudo[170553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:21:31 compute-0 sudo[170553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.184857343 +0000 UTC m=+0.046411437 container create 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:21:32 compute-0 systemd[1]: Started libpod-conmon-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope.
Oct 01 13:21:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.257952112 +0000 UTC m=+0.119506236 container init 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.167269647 +0000 UTC m=+0.028823761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.269223776 +0000 UTC m=+0.130777940 container start 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:21:32 compute-0 awesome_pasteur[170635]: 167 167
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.27394174 +0000 UTC m=+0.135495834 container attach 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:21:32 compute-0 systemd[1]: libpod-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope: Deactivated successfully.
Oct 01 13:21:32 compute-0 conmon[170635]: conmon 0ae273b94491c2280e9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope/container/memory.events
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.277429666 +0000 UTC m=+0.138983800 container died 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7a1d9594200cfb2e69037b24d3540519cc1f089eb6a5d2374cea479ea6800d1-merged.mount: Deactivated successfully.
Oct 01 13:21:32 compute-0 podman[170619]: 2025-10-01 13:21:32.317781316 +0000 UTC m=+0.179335420 container remove 0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:21:32 compute-0 systemd[1]: libpod-conmon-0ae273b94491c2280e9c27792e216b65fabc1b672a162f71d8308fc29cf855f6.scope: Deactivated successfully.
Oct 01 13:21:32 compute-0 podman[170659]: 2025-10-01 13:21:32.533722131 +0000 UTC m=+0.071461789 container create cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:21:32 compute-0 podman[170659]: 2025-10-01 13:21:32.486101639 +0000 UTC m=+0.023841317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:21:32 compute-0 systemd[1]: Started libpod-conmon-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope.
Oct 01 13:21:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:21:32 compute-0 podman[170659]: 2025-10-01 13:21:32.647417958 +0000 UTC m=+0.185157676 container init cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:21:32 compute-0 podman[170659]: 2025-10-01 13:21:32.653163463 +0000 UTC m=+0.190903151 container start cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:21:32 compute-0 podman[170659]: 2025-10-01 13:21:32.659587199 +0000 UTC m=+0.197326917 container attach cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:21:33 compute-0 ceph-mon[74802]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:33 compute-0 fervent_poincare[170676]: {
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_id": 0,
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "type": "bluestore"
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     },
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_id": 2,
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "type": "bluestore"
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     },
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_id": 1,
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:         "type": "bluestore"
Oct 01 13:21:33 compute-0 fervent_poincare[170676]:     }
Oct 01 13:21:33 compute-0 fervent_poincare[170676]: }
Oct 01 13:21:33 compute-0 systemd[1]: libpod-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Deactivated successfully.
Oct 01 13:21:33 compute-0 systemd[1]: libpod-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Consumed 1.038s CPU time.
Oct 01 13:21:33 compute-0 podman[170709]: 2025-10-01 13:21:33.727356461 +0000 UTC m=+0.023411915 container died cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4767c2c018d549a07afdd896580f4a9459aa359877e2e82651b8e01fd6a452c7-merged.mount: Deactivated successfully.
Oct 01 13:21:33 compute-0 podman[170709]: 2025-10-01 13:21:33.797856341 +0000 UTC m=+0.093911805 container remove cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:21:33 compute-0 systemd[1]: libpod-conmon-cb45f549933857b654d478a35783a1b3df8357865953cdea7ea24dfd143c1a8b.scope: Deactivated successfully.
Oct 01 13:21:33 compute-0 sudo[170553]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:21:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:21:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ca546af-3a9f-474a-aeb3-2cd533f639f9 does not exist
Oct 01 13:21:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f5b3ba6c-fb01-4cd1-bccb-b6789b3b0719 does not exist
Oct 01 13:21:33 compute-0 sudo[170724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:21:33 compute-0 sudo[170724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:33 compute-0 sudo[170724]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:33 compute-0 sudo[170749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:21:33 compute-0 sudo[170749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:21:34 compute-0 sudo[170749]: pam_unix(sudo:session): session closed for user root
Oct 01 13:21:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:21:34 compute-0 ceph-mon[74802]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:36 compute-0 podman[170774]: 2025-10-01 13:21:36.574779482 +0000 UTC m=+0.122609041 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:21:37 compute-0 ceph-mon[74802]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:39 compute-0 ceph-mon[74802]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:40 compute-0 ceph-mon[74802]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:41 compute-0 podman[170804]: 2025-10-01 13:21:41.522810729 +0000 UTC m=+0.077250257 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:21:41 compute-0 kernel: SELinux:  Converting 2765 SID table entries...
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:21:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:21:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:42 compute-0 sshd-session[170827]: Invalid user seekcy from 80.253.31.232 port 35450
Oct 01 13:21:42 compute-0 sshd-session[170827]: Received disconnect from 80.253.31.232 port 35450:11: Bye Bye [preauth]
Oct 01 13:21:42 compute-0 sshd-session[170827]: Disconnected from invalid user seekcy 80.253.31.232 port 35450 [preauth]
Oct 01 13:21:43 compute-0 ceph-mon[74802]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:45 compute-0 ceph-mon[74802]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:47 compute-0 ceph-mon[74802]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:21:47
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:21:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:21:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:49 compute-0 ceph-mon[74802]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:51 compute-0 sshd-session[170829]: Invalid user xiao from 156.236.31.46 port 44684
Oct 01 13:21:51 compute-0 ceph-mon[74802]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:51 compute-0 sshd-session[170829]: Received disconnect from 156.236.31.46 port 44684:11: Bye Bye [preauth]
Oct 01 13:21:51 compute-0 sshd-session[170829]: Disconnected from invalid user xiao 156.236.31.46 port 44684 [preauth]
Oct 01 13:21:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:53 compute-0 ceph-mon[74802]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:55 compute-0 ceph-mon[74802]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:21:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:21:57 compute-0 ceph-mon[74802]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:21:59 compute-0 ceph-mon[74802]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:01 compute-0 ceph-mon[74802]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:03 compute-0 ceph-mon[74802]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:05 compute-0 ceph-mon[74802]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:07 compute-0 ceph-mon[74802]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:07 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 01 13:22:07 compute-0 podman[179868]: 2025-10-01 13:22:07.532525225 +0000 UTC m=+0.085593782 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 01 13:22:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:09 compute-0 ceph-mon[74802]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:11 compute-0 ceph-mon[74802]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:22:12.285 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:22:12 compute-0 podman[183463]: 2025-10-01 13:22:12.486557085 +0000 UTC m=+0.047574212 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:22:13 compute-0 ceph-mon[74802]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:15 compute-0 ceph-mon[74802]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:17 compute-0 ceph-mon[74802]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:19 compute-0 sshd-session[187573]: Invalid user fivem from 200.7.101.139 port 32840
Oct 01 13:22:19 compute-0 ceph-mon[74802]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:19 compute-0 sshd-session[187573]: Received disconnect from 200.7.101.139 port 32840:11: Bye Bye [preauth]
Oct 01 13:22:19 compute-0 sshd-session[187573]: Disconnected from invalid user fivem 200.7.101.139 port 32840 [preauth]
Oct 01 13:22:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:21 compute-0 ceph-mon[74802]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:23 compute-0 ceph-mon[74802]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:24 compute-0 ceph-mon[74802]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:25 compute-0 sshd-session[187628]: Invalid user jing from 27.254.137.144 port 47062
Oct 01 13:22:25 compute-0 sshd-session[187628]: Received disconnect from 27.254.137.144 port 47062:11: Bye Bye [preauth]
Oct 01 13:22:25 compute-0 sshd-session[187628]: Disconnected from invalid user jing 27.254.137.144 port 47062 [preauth]
Oct 01 13:22:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:27 compute-0 ceph-mon[74802]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:29 compute-0 ceph-mon[74802]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:31 compute-0 ceph-mon[74802]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:31 compute-0 kernel: SELinux:  Converting 2766 SID table entries...
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 01 13:22:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 01 13:22:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:33 compute-0 ceph-mon[74802]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:33 compute-0 groupadd[187642]: group added to /etc/group: name=dnsmasq, GID=991
Oct 01 13:22:33 compute-0 groupadd[187642]: group added to /etc/gshadow: name=dnsmasq
Oct 01 13:22:33 compute-0 groupadd[187642]: new group: name=dnsmasq, GID=991
Oct 01 13:22:33 compute-0 useradd[187649]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 01 13:22:33 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:22:33 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 01 13:22:33 compute-0 dbus-broker-launch[784]: Noticed file-system modification, trigger reload.
Oct 01 13:22:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:34 compute-0 sudo[187659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:34 compute-0 sudo[187659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:34 compute-0 sudo[187659]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:34 compute-0 sudo[187684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:22:34 compute-0 sudo[187684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:34 compute-0 sudo[187684]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:34 compute-0 sudo[187709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:34 compute-0 sudo[187709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:34 compute-0 sudo[187709]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:34 compute-0 sudo[187734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:22:34 compute-0 sudo[187734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:35 compute-0 sudo[187734]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:35 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f2f0e995-9a5f-4deb-a695-bb94464eeb69 does not exist
Oct 01 13:22:35 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c8a35c61-dc4c-4904-8559-584df1b5d10f does not exist
Oct 01 13:22:35 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4eb54050-b67c-46cc-be43-c417fc8c5886 does not exist
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:22:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:22:35 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:22:35 compute-0 sudo[187789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:35 compute-0 sudo[187789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:35 compute-0 sudo[187789]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:35 compute-0 sudo[187814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:22:35 compute-0 sudo[187814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:35 compute-0 sudo[187814]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:35 compute-0 sudo[187839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:35 compute-0 sudo[187839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:35 compute-0 sudo[187839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:35 compute-0 groupadd[187871]: group added to /etc/group: name=clevis, GID=990
Oct 01 13:22:35 compute-0 groupadd[187871]: group added to /etc/gshadow: name=clevis
Oct 01 13:22:35 compute-0 sudo[187865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:22:35 compute-0 groupadd[187871]: new group: name=clevis, GID=990
Oct 01 13:22:35 compute-0 sudo[187865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:35 compute-0 useradd[187898]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 01 13:22:35 compute-0 usermod[187908]: add 'clevis' to group 'tss'
Oct 01 13:22:35 compute-0 usermod[187908]: add 'clevis' to shadow group 'tss'
Oct 01 13:22:35 compute-0 podman[187954]: 2025-10-01 13:22:35.828144024 +0000 UTC m=+0.048742890 container create 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:22:35 compute-0 systemd[1]: Started libpod-conmon-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope.
Oct 01 13:22:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:35 compute-0 podman[187954]: 2025-10-01 13:22:35.801954043 +0000 UTC m=+0.022552929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:35 compute-0 podman[187954]: 2025-10-01 13:22:35.947523521 +0000 UTC m=+0.168122397 container init 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:22:35 compute-0 podman[187954]: 2025-10-01 13:22:35.957802913 +0000 UTC m=+0.178401759 container start 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:22:35 compute-0 gifted_goldstine[187969]: 167 167
Oct 01 13:22:35 compute-0 systemd[1]: libpod-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope: Deactivated successfully.
Oct 01 13:22:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:36 compute-0 podman[187954]: 2025-10-01 13:22:36.032896049 +0000 UTC m=+0.253494915 container attach 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:22:36 compute-0 podman[187954]: 2025-10-01 13:22:36.033369274 +0000 UTC m=+0.253968130 container died 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:22:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-47f7ca6c32b1fa035030564905bdeded350af8b90f90fe853d4c59b271c6437a-merged.mount: Deactivated successfully.
Oct 01 13:22:36 compute-0 podman[187954]: 2025-10-01 13:22:36.183498006 +0000 UTC m=+0.404096872 container remove 86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:22:36 compute-0 systemd[1]: libpod-conmon-86298f023acbbd476528bc21556d25f86e9b08147bdf0b7dff74bf84426c67f4.scope: Deactivated successfully.
Oct 01 13:22:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:22:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:22:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:22:36 compute-0 podman[187999]: 2025-10-01 13:22:36.378569587 +0000 UTC m=+0.079093823 container create af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:22:36 compute-0 podman[187999]: 2025-10-01 13:22:36.322045703 +0000 UTC m=+0.022569979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:36 compute-0 systemd[1]: Started libpod-conmon-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope.
Oct 01 13:22:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:36 compute-0 podman[187999]: 2025-10-01 13:22:36.665050416 +0000 UTC m=+0.365574712 container init af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:22:36 compute-0 podman[187999]: 2025-10-01 13:22:36.672451078 +0000 UTC m=+0.372975334 container start af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:22:36 compute-0 podman[187999]: 2025-10-01 13:22:36.715002364 +0000 UTC m=+0.415526610 container attach af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:22:37 compute-0 ceph-mon[74802]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.434195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957434259, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3517307, "memory_usage": 3578616, "flush_reason": "Manual Compaction"}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957520517, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3442133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9794, "largest_seqno": 11836, "table_properties": {"data_size": 3432822, "index_size": 5933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17879, "raw_average_key_size": 19, "raw_value_size": 3414395, "raw_average_value_size": 3719, "num_data_blocks": 269, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324724, "oldest_key_time": 1759324724, "file_creation_time": 1759324957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 86362 microseconds, and 8598 cpu microseconds.
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.520559) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3442133 bytes OK
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.520589) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522372) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522385) EVENT_LOG_v1 {"time_micros": 1759324957522381, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.522402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3508779, prev total WAL file size 3508779, number of live WAL files 2.
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.523373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3361KB)], [26(6158KB)]
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957523399, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9748543, "oldest_snapshot_seqno": -1}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3734 keys, 8024534 bytes, temperature: kUnknown
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957633492, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8024534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7995588, "index_size": 18532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89684, "raw_average_key_size": 24, "raw_value_size": 7924204, "raw_average_value_size": 2122, "num_data_blocks": 801, "num_entries": 3734, "num_filter_entries": 3734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759324957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.633768) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8024534 bytes
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.635854) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 72.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 4248, records dropped: 514 output_compression: NoCompression
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.635869) EVENT_LOG_v1 {"time_micros": 1759324957635862, "job": 10, "event": "compaction_finished", "compaction_time_micros": 110187, "compaction_time_cpu_micros": 16760, "output_level": 6, "num_output_files": 1, "total_output_size": 8024534, "num_input_records": 4248, "num_output_records": 3734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957636447, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759324957637365, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.523299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:22:37.637398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:22:37 compute-0 romantic_raman[188017]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:22:37 compute-0 romantic_raman[188017]: --> relative data size: 1.0
Oct 01 13:22:37 compute-0 romantic_raman[188017]: --> All data devices are unavailable
Oct 01 13:22:37 compute-0 systemd[1]: libpod-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope: Deactivated successfully.
Oct 01 13:22:37 compute-0 podman[187999]: 2025-10-01 13:22:37.752702657 +0000 UTC m=+1.453226893 container died af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e086e89d3b829757d25744026adb6561f3ee8d955c15e5b03f95f3edc22a98-merged.mount: Deactivated successfully.
Oct 01 13:22:37 compute-0 podman[188057]: 2025-10-01 13:22:37.934188671 +0000 UTC m=+0.156567894 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:22:37 compute-0 podman[187999]: 2025-10-01 13:22:37.952881068 +0000 UTC m=+1.653405314 container remove af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:22:37 compute-0 systemd[1]: libpod-conmon-af969803ca5eaf7d48c0c22ab222d3565b9ac330c5411d2ffbaed4ed778d55f7.scope: Deactivated successfully.
Oct 01 13:22:37 compute-0 sudo[187865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:38 compute-0 sudo[188097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:38 compute-0 sudo[188097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:38 compute-0 sudo[188097]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:38 compute-0 sudo[188122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:22:38 compute-0 sudo[188122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:38 compute-0 sudo[188122]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:38 compute-0 sudo[188147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:38 compute-0 sudo[188147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:38 compute-0 sudo[188147]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:38 compute-0 sudo[188172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:22:38 compute-0 sudo[188172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:38 compute-0 polkitd[6665]: Reloading rules
Oct 01 13:22:38 compute-0 polkitd[6665]: Collecting garbage unconditionally...
Oct 01 13:22:38 compute-0 polkitd[6665]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 13:22:38 compute-0 polkitd[6665]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 13:22:38 compute-0 polkitd[6665]: Finished loading, compiling and executing 4 rules
Oct 01 13:22:38 compute-0 polkitd[6665]: Reloading rules
Oct 01 13:22:38 compute-0 polkitd[6665]: Collecting garbage unconditionally...
Oct 01 13:22:38 compute-0 polkitd[6665]: Loading rules from directory /etc/polkit-1/rules.d
Oct 01 13:22:38 compute-0 polkitd[6665]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 01 13:22:38 compute-0 polkitd[6665]: Finished loading, compiling and executing 4 rules
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.514987367 +0000 UTC m=+0.036892199 container create 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:22:38 compute-0 systemd[1]: Started libpod-conmon-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope.
Oct 01 13:22:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.498234791 +0000 UTC m=+0.020139643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.610555966 +0000 UTC m=+0.132460818 container init 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.618133234 +0000 UTC m=+0.140038066 container start 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:22:38 compute-0 practical_shtern[188275]: 167 167
Oct 01 13:22:38 compute-0 systemd[1]: libpod-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope: Deactivated successfully.
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.628062865 +0000 UTC m=+0.149967727 container attach 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.62948439 +0000 UTC m=+0.151389222 container died 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:22:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0313f59403c167c1517e13fbb8a7252ea9b13b4717aa63064c46997d4c44cfe4-merged.mount: Deactivated successfully.
Oct 01 13:22:38 compute-0 podman[188249]: 2025-10-01 13:22:38.858021141 +0000 UTC m=+0.379925983 container remove 0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shtern, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:22:38 compute-0 systemd[1]: libpod-conmon-0d1f50f0c0b68d7fa2112d6a9ef06f980022bba822fddeaf54fc21cb10c508a5.scope: Deactivated successfully.
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.057050567 +0000 UTC m=+0.059147828 container create e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:22:39 compute-0 systemd[1]: Started libpod-conmon-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope.
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.019861789 +0000 UTC m=+0.021959080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.165269612 +0000 UTC m=+0.167366893 container init e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.175780543 +0000 UTC m=+0.177877804 container start e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.182862435 +0000 UTC m=+0.184959726 container attach e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:22:39 compute-0 ceph-mon[74802]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:39 compute-0 groupadd[188466]: group added to /etc/group: name=ceph, GID=167
Oct 01 13:22:39 compute-0 groupadd[188466]: group added to /etc/gshadow: name=ceph
Oct 01 13:22:39 compute-0 groupadd[188466]: new group: name=ceph, GID=167
Oct 01 13:22:39 compute-0 useradd[188474]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 01 13:22:39 compute-0 sleepy_turing[188375]: {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     "0": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "devices": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "/dev/loop3"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             ],
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_name": "ceph_lv0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_size": "21470642176",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "name": "ceph_lv0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "tags": {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_name": "ceph",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.crush_device_class": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.encrypted": "0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_id": "0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.vdo": "0"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             },
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "vg_name": "ceph_vg0"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         }
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     ],
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     "1": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "devices": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "/dev/loop4"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             ],
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_name": "ceph_lv1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_size": "21470642176",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "name": "ceph_lv1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "tags": {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_name": "ceph",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.crush_device_class": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.encrypted": "0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_id": "1",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.vdo": "0"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             },
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "vg_name": "ceph_vg1"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         }
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     ],
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     "2": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "devices": [
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "/dev/loop5"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             ],
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_name": "ceph_lv2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_size": "21470642176",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "name": "ceph_lv2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "tags": {
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.cluster_name": "ceph",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.crush_device_class": "",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.encrypted": "0",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osd_id": "2",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:                 "ceph.vdo": "0"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             },
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "type": "block",
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:             "vg_name": "ceph_vg2"
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:         }
Oct 01 13:22:39 compute-0 sleepy_turing[188375]:     ]
Oct 01 13:22:39 compute-0 sleepy_turing[188375]: }
Oct 01 13:22:39 compute-0 systemd[1]: libpod-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope: Deactivated successfully.
Oct 01 13:22:39 compute-0 podman[188353]: 2025-10-01 13:22:39.948921874 +0000 UTC m=+0.951019135 container died e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb564afd50c65d7e710cf26ad88ae9f89ec287edbec607466ad4c1125d7f96cd-merged.mount: Deactivated successfully.
Oct 01 13:22:40 compute-0 podman[188353]: 2025-10-01 13:22:40.004316462 +0000 UTC m=+1.006413723 container remove e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:22:40 compute-0 systemd[1]: libpod-conmon-e48c1fbb2c0bb0f7ce2835fff919d57f575452e0427422ab910505466f5d5e19.scope: Deactivated successfully.
Oct 01 13:22:40 compute-0 sudo[188172]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:40 compute-0 sudo[188496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:40 compute-0 sudo[188496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:40 compute-0 sudo[188496]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:40 compute-0 sudo[188521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:22:40 compute-0 sudo[188521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:40 compute-0 sudo[188521]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:40 compute-0 sudo[188546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:40 compute-0 sudo[188546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:40 compute-0 sudo[188546]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:40 compute-0 sudo[188571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:22:40 compute-0 sudo[188571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.58645858 +0000 UTC m=+0.047836213 container create 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:22:40 compute-0 systemd[1]: Started libpod-conmon-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope.
Oct 01 13:22:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.648788216 +0000 UTC m=+0.110165879 container init 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.654882306 +0000 UTC m=+0.116259949 container start 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.561876458 +0000 UTC m=+0.023254111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:40 compute-0 magical_keldysh[188654]: 167 167
Oct 01 13:22:40 compute-0 systemd[1]: libpod-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope: Deactivated successfully.
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.659193082 +0000 UTC m=+0.120570715 container attach 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.659811772 +0000 UTC m=+0.121189415 container died 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5837606e2ff6942b9410683772c8740b21f9d917a97a3e7cec0d83db01331734-merged.mount: Deactivated successfully.
Oct 01 13:22:40 compute-0 podman[188637]: 2025-10-01 13:22:40.695643186 +0000 UTC m=+0.157020819 container remove 1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:22:40 compute-0 systemd[1]: libpod-conmon-1c13b1f2a757c910398839d6bdbb72fb1fa3cac55af02f5b044580ed4f0dc8d3.scope: Deactivated successfully.
Oct 01 13:22:40 compute-0 podman[188678]: 2025-10-01 13:22:40.84556533 +0000 UTC m=+0.037536768 container create c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:22:40 compute-0 systemd[1]: Started libpod-conmon-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope.
Oct 01 13:22:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:22:40 compute-0 podman[188678]: 2025-10-01 13:22:40.82769862 +0000 UTC m=+0.019670088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:22:40 compute-0 podman[188678]: 2025-10-01 13:22:40.92776302 +0000 UTC m=+0.119734478 container init c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:22:40 compute-0 podman[188678]: 2025-10-01 13:22:40.934898393 +0000 UTC m=+0.126869881 container start c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:22:40 compute-0 podman[188678]: 2025-10-01 13:22:40.938907869 +0000 UTC m=+0.130879377 container attach c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:22:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:41 compute-0 ceph-mon[74802]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:41 compute-0 boring_austin[188694]: {
Oct 01 13:22:41 compute-0 boring_austin[188694]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_id": 0,
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "type": "bluestore"
Oct 01 13:22:41 compute-0 boring_austin[188694]:     },
Oct 01 13:22:41 compute-0 boring_austin[188694]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_id": 2,
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "type": "bluestore"
Oct 01 13:22:41 compute-0 boring_austin[188694]:     },
Oct 01 13:22:41 compute-0 boring_austin[188694]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_id": 1,
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:22:41 compute-0 boring_austin[188694]:         "type": "bluestore"
Oct 01 13:22:41 compute-0 boring_austin[188694]:     }
Oct 01 13:22:41 compute-0 boring_austin[188694]: }
Oct 01 13:22:41 compute-0 systemd[1]: libpod-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope: Deactivated successfully.
Oct 01 13:22:41 compute-0 podman[188678]: 2025-10-01 13:22:41.885435591 +0000 UTC m=+1.077407039 container died c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-68dedc8146cb88ca01fb3395fa2d38ad77d2e804da646906172961cbf7c9a34e-merged.mount: Deactivated successfully.
Oct 01 13:22:41 compute-0 podman[188678]: 2025-10-01 13:22:41.940169438 +0000 UTC m=+1.132140886 container remove c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:22:41 compute-0 systemd[1]: libpod-conmon-c6a870b7d9c6e0813ef86454e4a41792b38fc6a6b9511d17641f22b48563ce3e.scope: Deactivated successfully.
Oct 01 13:22:41 compute-0 sudo[188571]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:22:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:22:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 41f14e0c-3a85-46d2-9384-1518184ebd78 does not exist
Oct 01 13:22:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c5db93c9-22a9-4f54-af58-6b55f45d5ad0 does not exist
Oct 01 13:22:42 compute-0 sudo[189021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:22:42 compute-0 sudo[189021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:42 compute-0 sudo[189021]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:42 compute-0 sudo[189102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:22:42 compute-0 sudo[189102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:22:42 compute-0 sudo[189102]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:42 compute-0 sshd-session[188706]: Invalid user image from 80.253.31.232 port 46460
Oct 01 13:22:42 compute-0 sshd-session[188706]: Received disconnect from 80.253.31.232 port 46460:11: Bye Bye [preauth]
Oct 01 13:22:42 compute-0 sshd-session[188706]: Disconnected from invalid user image 80.253.31.232 port 46460 [preauth]
Oct 01 13:22:43 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 01 13:22:43 compute-0 sshd[1010]: Received signal 15; terminating.
Oct 01 13:22:43 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 01 13:22:43 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 01 13:22:43 compute-0 systemd[1]: sshd.service: Consumed 12.382s CPU time, read 0B from disk, written 316.0K to disk.
Oct 01 13:22:43 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 01 13:22:43 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 01 13:22:43 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 13:22:43 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 13:22:43 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 01 13:22:43 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 01 13:22:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:43 compute-0 ceph-mon[74802]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:22:43 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 01 13:22:43 compute-0 sshd[189423]: Server listening on 0.0.0.0 port 22.
Oct 01 13:22:43 compute-0 sshd[189423]: Server listening on :: port 22.
Oct 01 13:22:43 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 01 13:22:43 compute-0 podman[189412]: 2025-10-01 13:22:43.139541095 +0000 UTC m=+0.100968999 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:22:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:45 compute-0 ceph-mon[74802]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:22:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:22:46 compute-0 systemd[1]: Reloading.
Oct 01 13:22:46 compute-0 systemd-sysv-generator[189689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:46 compute-0 systemd-rc-local-generator[189685]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:46 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:22:47 compute-0 ceph-mon[74802]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:22:47
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'backups']
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:22:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:22:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:49 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 01 13:22:49 compute-0 PackageKit[192306]: daemon start
Oct 01 13:22:49 compute-0 ceph-mon[74802]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:49 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 01 13:22:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:50 compute-0 sudo[169318]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:50 compute-0 sudo[193687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmekdmnrqplokmppdssxdfcnsamebtsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324970.209936-336-187019961022170/AnsiballZ_systemd.py'
Oct 01 13:22:50 compute-0 sudo[193687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:51 compute-0 python3.9[193705]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:22:51 compute-0 systemd[1]: Reloading.
Oct 01 13:22:51 compute-0 systemd-sysv-generator[194151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:51 compute-0 systemd-rc-local-generator[194147]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:51 compute-0 sudo[193687]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:51 compute-0 ceph-mon[74802]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:52 compute-0 sudo[194865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqgmvfzwdngnpmbcqmyrcpdrgdnwxkjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324971.7800894-336-53132940145469/AnsiballZ_systemd.py'
Oct 01 13:22:52 compute-0 sudo[194865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:52 compute-0 python3.9[194883]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:22:52 compute-0 systemd[1]: Reloading.
Oct 01 13:22:52 compute-0 systemd-rc-local-generator[195202]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:52 compute-0 systemd-sysv-generator[195207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:52 compute-0 sudo[194865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:53 compute-0 sudo[195843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inffceqxgpbonebsnjityuxcndahbovu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324972.9573698-336-189595814889335/AnsiballZ_systemd.py'
Oct 01 13:22:53 compute-0 sudo[195843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:53 compute-0 python3.9[195873]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:22:53 compute-0 ceph-mon[74802]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:54 compute-0 systemd[1]: Reloading.
Oct 01 13:22:54 compute-0 systemd-sysv-generator[197031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:54 compute-0 systemd-rc-local-generator[197028]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:54 compute-0 ceph-mon[74802]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:55 compute-0 sudo[195843]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:55 compute-0 sudo[197739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiwnwqenwxhxuanxgzisttaegkpbilet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324975.1822214-336-61254113008060/AnsiballZ_systemd.py'
Oct 01 13:22:55 compute-0 sudo[197739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:55 compute-0 python3.9[197768]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:22:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:22:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:56 compute-0 systemd[1]: Reloading.
Oct 01 13:22:57 compute-0 systemd-sysv-generator[198691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:57 compute-0 systemd-rc-local-generator[198687]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:22:57 compute-0 ceph-mon[74802]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:22:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:22:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.712s CPU time.
Oct 01 13:22:57 compute-0 systemd[1]: run-r7c7aca91b9df4ef2a4709283f7a78074.service: Deactivated successfully.
Oct 01 13:22:57 compute-0 sudo[197739]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:57 compute-0 sudo[198846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdxcqfmwpyfcvlsiyijppdyynmphlte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324977.5236301-365-110591942219850/AnsiballZ_systemd.py'
Oct 01 13:22:57 compute-0 sudo[198846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:58 compute-0 python3.9[198848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:22:58 compute-0 systemd[1]: Reloading.
Oct 01 13:22:58 compute-0 systemd-rc-local-generator[198877]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:58 compute-0 systemd-sysv-generator[198881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:58 compute-0 sudo[198846]: pam_unix(sudo:session): session closed for user root
Oct 01 13:22:59 compute-0 sshd-session[198887]: Invalid user nico from 156.236.31.46 port 44772
Oct 01 13:22:59 compute-0 sshd-session[198887]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:22:59 compute-0 sshd-session[198887]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:22:59 compute-0 sudo[199038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpqzsnqszmfixxhxzvezdlgresoirkvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324978.7365708-365-50224322175437/AnsiballZ_systemd.py'
Oct 01 13:22:59 compute-0 sudo[199038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:22:59 compute-0 ceph-mon[74802]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:22:59 compute-0 python3.9[199040]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:22:59 compute-0 systemd[1]: Reloading.
Oct 01 13:22:59 compute-0 systemd-rc-local-generator[199068]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:22:59 compute-0 systemd-sysv-generator[199073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:22:59 compute-0 sudo[199038]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:00 compute-0 sudo[199228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flquziwgbpbdbvxhsepqwubhttwjmend ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324979.9929807-365-126532762500224/AnsiballZ_systemd.py'
Oct 01 13:23:00 compute-0 sudo[199228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:00 compute-0 python3.9[199230]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:00 compute-0 systemd[1]: Reloading.
Oct 01 13:23:00 compute-0 systemd-rc-local-generator[199260]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:23:00 compute-0 systemd-sysv-generator[199264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:23:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:01 compute-0 sudo[199228]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:01 compute-0 sshd-session[198887]: Failed password for invalid user nico from 156.236.31.46 port 44772 ssh2
Oct 01 13:23:01 compute-0 ceph-mon[74802]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:01 compute-0 sudo[199418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdecwnbmqrkcusocjmaoyyvqivlfrxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324981.2357383-365-60601544979845/AnsiballZ_systemd.py'
Oct 01 13:23:01 compute-0 sudo[199418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:01 compute-0 sshd-session[198887]: Received disconnect from 156.236.31.46 port 44772:11: Bye Bye [preauth]
Oct 01 13:23:01 compute-0 sshd-session[198887]: Disconnected from invalid user nico 156.236.31.46 port 44772 [preauth]
Oct 01 13:23:01 compute-0 python3.9[199420]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:02 compute-0 sudo[199418]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:02 compute-0 sudo[199573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yncydgrgmuymldceucvbsxhtnbvblvrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324982.2074146-365-218629942276118/AnsiballZ_systemd.py'
Oct 01 13:23:02 compute-0 sudo[199573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:02 compute-0 python3.9[199575]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:03 compute-0 systemd[1]: Reloading.
Oct 01 13:23:03 compute-0 systemd-sysv-generator[199607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:23:03 compute-0 systemd-rc-local-generator[199602]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:23:03 compute-0 sudo[199573]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:03 compute-0 ceph-mon[74802]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:04 compute-0 sudo[199763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmshrcaozylaevfxqcxlovckihvkcrue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324983.7045174-401-144053375503971/AnsiballZ_systemd.py'
Oct 01 13:23:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:04 compute-0 sudo[199763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:04 compute-0 python3.9[199765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 01 13:23:04 compute-0 systemd[1]: Reloading.
Oct 01 13:23:04 compute-0 systemd-rc-local-generator[199796]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:23:04 compute-0 systemd-sysv-generator[199801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:23:04 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 01 13:23:04 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 01 13:23:04 compute-0 sudo[199763]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:05 compute-0 sudo[199956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzapjvpgmyvuvezwuxuxyhutbmizqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324985.1344144-409-160470398677996/AnsiballZ_systemd.py'
Oct 01 13:23:05 compute-0 sudo[199956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:05 compute-0 ceph-mon[74802]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:05 compute-0 python3.9[199958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:05 compute-0 sudo[199956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:06 compute-0 sudo[200111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vneswaxngjmcszxbqxqulohixhfnyrvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324986.068319-409-223372326509048/AnsiballZ_systemd.py'
Oct 01 13:23:06 compute-0 sudo[200111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:06 compute-0 python3.9[200113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:06 compute-0 sudo[200111]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:07 compute-0 sudo[200266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbdnauymudgpgbprqxwqyygfluibtqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324987.05287-409-249002503248720/AnsiballZ_systemd.py'
Oct 01 13:23:07 compute-0 sudo[200266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:07 compute-0 ceph-mon[74802]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:07 compute-0 python3.9[200268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:07 compute-0 sudo[200266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:08 compute-0 sudo[200431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qscidzroluzmqoanhneigxudlqtzkuea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324988.076166-409-258057387201997/AnsiballZ_systemd.py'
Oct 01 13:23:08 compute-0 sudo[200431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:08 compute-0 podman[200395]: 2025-10-01 13:23:08.53082434 +0000 UTC m=+0.139027813 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:23:08 compute-0 python3.9[200443]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:08 compute-0 sudo[200431]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:09 compute-0 sudo[200604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlyvtxogqrnfekgpddlkceleegeeicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324989.105727-409-138828519685305/AnsiballZ_systemd.py'
Oct 01 13:23:09 compute-0 sudo[200604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:09 compute-0 ceph-mon[74802]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:09 compute-0 python3.9[200606]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:09 compute-0 sudo[200604]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:10 compute-0 sudo[200759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezadhzomyrhtcvqxenrhcocsbgsaqjjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324990.1653557-409-192448207198074/AnsiballZ_systemd.py'
Oct 01 13:23:10 compute-0 sudo[200759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:10 compute-0 python3.9[200761]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:11 compute-0 sudo[200759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:11 compute-0 ceph-mon[74802]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:11 compute-0 sudo[200914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgiaisvtwvzoywrgwxydzwmzgkelfnnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324991.2293816-409-153430671434787/AnsiballZ_systemd.py'
Oct 01 13:23:11 compute-0 sudo[200914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:12 compute-0 python3.9[200916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:12 compute-0 sudo[200914]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.287 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:23:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:23:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:23:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:23:12 compute-0 sudo[201069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpssgivazxikjaeukmgiaxhstsnkmxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324992.325361-409-70902925035803/AnsiballZ_systemd.py'
Oct 01 13:23:12 compute-0 sudo[201069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:13 compute-0 python3.9[201071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:13 compute-0 ceph-mon[74802]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:13 compute-0 podman[201073]: 2025-10-01 13:23:13.530634204 +0000 UTC m=+0.080941571 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct 01 13:23:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:14 compute-0 sudo[201069]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:14 compute-0 sudo[201243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhiiciorbxymqwldlwxtyiratqozelhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324994.332036-409-130589717751177/AnsiballZ_systemd.py'
Oct 01 13:23:14 compute-0 sudo[201243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:15 compute-0 python3.9[201245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:15 compute-0 ceph-mon[74802]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:15 compute-0 sudo[201243]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:15 compute-0 sudo[201398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcohqjbtznoxmkbyfmcbgurkrkbfvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324995.3557994-409-95529342694746/AnsiballZ_systemd.py'
Oct 01 13:23:15 compute-0 sudo[201398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:16 compute-0 python3.9[201400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:16 compute-0 sudo[201398]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:16 compute-0 sudo[201553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfkkshieuvbtwmgyfceyeoonxygnblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324996.3269048-409-10016569051934/AnsiballZ_systemd.py'
Oct 01 13:23:16 compute-0 sudo[201553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:16 compute-0 python3.9[201555]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:17 compute-0 sudo[201553]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:17 compute-0 ceph-mon[74802]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:17 compute-0 sshd-session[201658]: banner exchange: Connection from 184.105.247.194 port 21770: invalid format
Oct 01 13:23:17 compute-0 sudo[201709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdnlxlbmqwotppddugiynsepliahkjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324997.2301526-409-79532481036021/AnsiballZ_systemd.py'
Oct 01 13:23:17 compute-0 sudo[201709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:17 compute-0 python3.9[201711]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:18 compute-0 sudo[201709]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:18 compute-0 sudo[201864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgkuprvcgifyrnsempwrpgckhkzzyfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324998.2501547-409-96281427934082/AnsiballZ_systemd.py'
Oct 01 13:23:18 compute-0 sudo[201864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:18 compute-0 python3.9[201866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:19 compute-0 sudo[201864]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:19 compute-0 ceph-mon[74802]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:19 compute-0 sudo[202019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nklcnnrksgytqpvoluuqsjofbonvikiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759324999.2501714-409-39117567711193/AnsiballZ_systemd.py'
Oct 01 13:23:19 compute-0 sudo[202019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:20 compute-0 python3.9[202021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 01 13:23:20 compute-0 sudo[202019]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:21 compute-0 sudo[202174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyaslrjiuuqekijdyiwjtftwphgnsnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325000.6849167-511-270586846246655/AnsiballZ_file.py'
Oct 01 13:23:21 compute-0 sudo[202174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:21 compute-0 python3.9[202176]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:21 compute-0 sudo[202174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:21 compute-0 ceph-mon[74802]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:21 compute-0 sudo[202326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llitdbwiedlkszhfyewmdvycbudshtnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325001.5091975-511-249887488763066/AnsiballZ_file.py'
Oct 01 13:23:21 compute-0 sudo[202326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:22 compute-0 python3.9[202328]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:22 compute-0 sudo[202326]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:22 compute-0 sudo[202478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiqsggnqxwurmldarwsvuzqleokmyrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325002.3288827-511-222082836605411/AnsiballZ_file.py'
Oct 01 13:23:22 compute-0 sudo[202478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:22 compute-0 python3.9[202480]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:23 compute-0 sudo[202478]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:23 compute-0 sudo[202630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjryseevvxqqtgcoxnwoslcghuyclfdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325003.1751673-511-103888384458179/AnsiballZ_file.py'
Oct 01 13:23:23 compute-0 sudo[202630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:23 compute-0 ceph-mon[74802]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:23 compute-0 python3.9[202632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:23 compute-0 sudo[202630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:24 compute-0 sudo[202782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfcowgrbgnmxthdvxptfrypfeiwbiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325003.99718-511-227266690425604/AnsiballZ_file.py'
Oct 01 13:23:24 compute-0 sudo[202782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:24 compute-0 python3.9[202784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:24 compute-0 sudo[202782]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:24 compute-0 ceph-mon[74802]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:25 compute-0 sudo[202934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnzplewfclhwuutheqcsapwyapggeft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325004.8063467-511-16576849129483/AnsiballZ_file.py'
Oct 01 13:23:25 compute-0 sudo[202934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:25 compute-0 python3.9[202936]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:23:25 compute-0 sudo[202934]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:26 compute-0 sudo[203086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvyicrwwagxvjjvqqivgvorbguhtuiub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325005.7306333-554-194252961523567/AnsiballZ_stat.py'
Oct 01 13:23:26 compute-0 sudo[203086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:26 compute-0 python3.9[203088]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:26 compute-0 sudo[203086]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:27 compute-0 sudo[203211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbmnnyualfshdcnjdftvjxhwtrxujvft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325005.7306333-554-194252961523567/AnsiballZ_copy.py'
Oct 01 13:23:27 compute-0 sudo[203211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:27 compute-0 ceph-mon[74802]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:27 compute-0 python3.9[203213]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325005.7306333-554-194252961523567/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:27 compute-0 sudo[203211]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:28 compute-0 sudo[203363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomxmpwdbmjmjbjdkgsunmifnsaekdlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325007.7551806-554-56120987708746/AnsiballZ_stat.py'
Oct 01 13:23:28 compute-0 sudo[203363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:28 compute-0 python3.9[203365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:28 compute-0 sudo[203363]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:29 compute-0 sudo[203488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxuktwdwgunfuxqfvffhtxahoqeasnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325007.7551806-554-56120987708746/AnsiballZ_copy.py'
Oct 01 13:23:29 compute-0 sudo[203488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:29 compute-0 python3.9[203490]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325007.7551806-554-56120987708746/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:29 compute-0 sudo[203488]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:29 compute-0 ceph-mon[74802]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:29 compute-0 sudo[203640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clyjfbplkiyrbmkfwezgbufwfvxhcdzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325009.527832-554-24432632567439/AnsiballZ_stat.py'
Oct 01 13:23:29 compute-0 sudo[203640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:30 compute-0 python3.9[203642]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:30 compute-0 sudo[203640]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:30 compute-0 sudo[203765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndfeflrnmnbkkrquyyzpfcxfsbrmtip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325009.527832-554-24432632567439/AnsiballZ_copy.py'
Oct 01 13:23:30 compute-0 sudo[203765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:30 compute-0 python3.9[203767]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325009.527832-554-24432632567439/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:31 compute-0 sudo[203765]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:31 compute-0 ceph-mon[74802]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:31 compute-0 sudo[203917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqszhurimpkhocymnhtfhrtjweixdzia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325011.2296195-554-25795081733247/AnsiballZ_stat.py'
Oct 01 13:23:31 compute-0 sudo[203917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:31 compute-0 python3.9[203919]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:31 compute-0 sudo[203917]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:32 compute-0 sudo[204042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msiibzypelsnegqgbjqcyalesnbjjaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325011.2296195-554-25795081733247/AnsiballZ_copy.py'
Oct 01 13:23:32 compute-0 sudo[204042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:32 compute-0 python3.9[204044]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325011.2296195-554-25795081733247/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:32 compute-0 sudo[204042]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:33 compute-0 sudo[204194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzewwbvhoosnivhlrmtdsftjlwkwsnte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325012.9134412-554-116679003871970/AnsiballZ_stat.py'
Oct 01 13:23:33 compute-0 sudo[204194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:33 compute-0 python3.9[204196]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:33 compute-0 sudo[204194]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:33 compute-0 ceph-mon[74802]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:34 compute-0 sudo[204319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iryqsbvdjqucsqetghvqxcclcbhivjci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325012.9134412-554-116679003871970/AnsiballZ_copy.py'
Oct 01 13:23:34 compute-0 sudo[204319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:34 compute-0 sshd-session[204322]: Invalid user aurora from 200.7.101.139 port 43392
Oct 01 13:23:34 compute-0 python3.9[204321]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325012.9134412-554-116679003871970/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:34 compute-0 sshd-session[204322]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:23:34 compute-0 sshd-session[204322]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:23:34 compute-0 sudo[204319]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:35 compute-0 sudo[204473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgecpuukhtjxkdtddfgnlckfymhwjtrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325014.8816445-554-165862113291073/AnsiballZ_stat.py'
Oct 01 13:23:35 compute-0 sudo[204473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:35 compute-0 ceph-mon[74802]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:35 compute-0 python3.9[204475]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:35 compute-0 sudo[204473]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:36 compute-0 sudo[204598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qljobjopfhqmddsymeysvovpworucwrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325014.8816445-554-165862113291073/AnsiballZ_copy.py'
Oct 01 13:23:36 compute-0 sudo[204598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:36 compute-0 python3.9[204600]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325014.8816445-554-165862113291073/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:36 compute-0 sudo[204598]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:36 compute-0 sshd-session[204322]: Failed password for invalid user aurora from 200.7.101.139 port 43392 ssh2
Oct 01 13:23:37 compute-0 sshd-session[204322]: Received disconnect from 200.7.101.139 port 43392:11: Bye Bye [preauth]
Oct 01 13:23:37 compute-0 sshd-session[204322]: Disconnected from invalid user aurora 200.7.101.139 port 43392 [preauth]
Oct 01 13:23:37 compute-0 sudo[204752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lutvdzyxlfswerafnlbwdxyezbytmdph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325016.6788673-554-159086594126385/AnsiballZ_stat.py'
Oct 01 13:23:37 compute-0 sudo[204752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:37 compute-0 python3.9[204754]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:37 compute-0 sudo[204752]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:37 compute-0 ceph-mon[74802]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:37 compute-0 sudo[204875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiixutjojecfjlrmemmuhaqijdzyvnwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325016.6788673-554-159086594126385/AnsiballZ_copy.py'
Oct 01 13:23:37 compute-0 sudo[204875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:38 compute-0 python3.9[204877]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325016.6788673-554-159086594126385/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:38 compute-0 sudo[204875]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:38 compute-0 sudo[205027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlgnduayacyocowgjrwpneeyfesfhcaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325018.2714937-554-22007393710970/AnsiballZ_stat.py'
Oct 01 13:23:38 compute-0 sudo[205027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:38 compute-0 unix_chkpwd[205054]: password check failed for user (root)
Oct 01 13:23:38 compute-0 sshd-session[204646]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:23:38 compute-0 podman[205029]: 2025-10-01 13:23:38.825829617 +0000 UTC m=+0.170675929 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923)
Oct 01 13:23:38 compute-0 python3.9[205030]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:38 compute-0 sudo[205027]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:39 compute-0 sudo[205179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawvkywvqdxxxkpyxzselqcblpgrexfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325018.2714937-554-22007393710970/AnsiballZ_copy.py'
Oct 01 13:23:39 compute-0 sudo[205179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:39 compute-0 python3.9[205181]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759325018.2714937-554-22007393710970/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:39 compute-0 sudo[205179]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:40 compute-0 ceph-mon[74802]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:40 compute-0 sudo[205331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrgocbskafpfudlvonmaskjpyitpqiba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325019.9098613-667-131506975678410/AnsiballZ_command.py'
Oct 01 13:23:40 compute-0 sudo[205331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:40 compute-0 python3.9[205333]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 01 13:23:40 compute-0 sudo[205331]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:40 compute-0 sshd-session[204646]: Failed password for root from 27.254.137.144 port 42700 ssh2
Oct 01 13:23:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:41 compute-0 sudo[205486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxjzfdysvgdshwumttorjibhqxdqvxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325021.01629-676-233146164833489/AnsiballZ_file.py'
Oct 01 13:23:41 compute-0 sudo[205486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:41 compute-0 ceph-mon[74802]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:41 compute-0 python3.9[205488]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:41 compute-0 sudo[205486]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:41 compute-0 unix_chkpwd[205536]: password check failed for user (root)
Oct 01 13:23:41 compute-0 sshd-session[205424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232  user=root
Oct 01 13:23:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:42 compute-0 sudo[205639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weubkmmdtifslbwlwsipqruptfrwlajn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325021.8751774-676-74022672176136/AnsiballZ_file.py'
Oct 01 13:23:42 compute-0 sudo[205639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:42 compute-0 sudo[205640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:42 compute-0 sudo[205640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:42 compute-0 sudo[205640]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:42 compute-0 sudo[205667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:23:42 compute-0 sudo[205667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:42 compute-0 sudo[205667]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:42 compute-0 sudo[205692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:42 compute-0 sudo[205692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:42 compute-0 sudo[205692]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:42 compute-0 python3.9[205650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:42 compute-0 sudo[205717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:23:42 compute-0 sudo[205717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:42 compute-0 sudo[205639]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:42 compute-0 sshd-session[204646]: Received disconnect from 27.254.137.144 port 42700:11: Bye Bye [preauth]
Oct 01 13:23:42 compute-0 sshd-session[204646]: Disconnected from authenticating user root 27.254.137.144 port 42700 [preauth]
Oct 01 13:23:42 compute-0 sudo[205717]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:42 compute-0 sudo[205923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbccugzjhqwzedryamkarhotokgubvxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325022.6068952-676-161623464827831/AnsiballZ_file.py'
Oct 01 13:23:42 compute-0 sudo[205923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:23:43 compute-0 python3.9[205925]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:43 compute-0 sudo[205923]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:43 compute-0 ceph-mon[74802]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:23:43 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 95d593c5-19b8-471c-88e1-8870bf7d6cb1 does not exist
Oct 01 13:23:43 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a177e8bc-53b3-4e42-8604-cb54771e19e4 does not exist
Oct 01 13:23:43 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 032432ac-0e98-4f97-b989-bdc9b6ca3746 does not exist
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:23:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:23:43 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:23:43 compute-0 sudo[205971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:43 compute-0 sudo[205971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:43 compute-0 sudo[205971]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:43 compute-0 sudo[206027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:23:43 compute-0 sudo[206027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:43 compute-0 sudo[206027]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:43 compute-0 sshd-session[205424]: Failed password for root from 80.253.31.232 port 58550 ssh2
Oct 01 13:23:43 compute-0 sudo[206081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:43 compute-0 sudo[206081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:43 compute-0 podman[206074]: 2025-10-01 13:23:43.707886127 +0000 UTC m=+0.074690457 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 01 13:23:43 compute-0 sudo[206081]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:43 compute-0 sudo[206190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzdiwvdcsohhspfbyejwpqfatxbexfbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325023.4744887-676-13242642864047/AnsiballZ_file.py'
Oct 01 13:23:43 compute-0 sudo[206145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:23:43 compute-0 sudo[206190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:43 compute-0 sudo[206145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:43 compute-0 sshd-session[205424]: Received disconnect from 80.253.31.232 port 58550:11: Bye Bye [preauth]
Oct 01 13:23:43 compute-0 sshd-session[205424]: Disconnected from authenticating user root 80.253.31.232 port 58550 [preauth]
Oct 01 13:23:43 compute-0 python3.9[206193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:44 compute-0 sudo[206190]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.107962019 +0000 UTC m=+0.026115638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.212120207 +0000 UTC m=+0.130273816 container create 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:23:44 compute-0 systemd[1]: Started libpod-conmon-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope.
Oct 01 13:23:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.393302353 +0000 UTC m=+0.311456042 container init 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.404280216 +0000 UTC m=+0.322433825 container start 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:23:44 compute-0 awesome_mendeleev[206353]: 167 167
Oct 01 13:23:44 compute-0 systemd[1]: libpod-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope: Deactivated successfully.
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:23:44 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.422269159 +0000 UTC m=+0.340422778 container attach 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.423323252 +0000 UTC m=+0.341476861 container died 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:23:44 compute-0 sudo[206414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmyejkohblhrlhglkzjfdwtjkccaalfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325024.1634624-676-170460218234007/AnsiballZ_file.py'
Oct 01 13:23:44 compute-0 sudo[206414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e172d2f2e749272905e52d7ecfa992f91dbc03674015789f55b57273d6136daa-merged.mount: Deactivated successfully.
Oct 01 13:23:44 compute-0 python3.9[206422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:44 compute-0 sudo[206414]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:44 compute-0 podman[206250]: 2025-10-01 13:23:44.801085516 +0000 UTC m=+0.719239155 container remove 6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:23:44 compute-0 systemd[1]: libpod-conmon-6e0b22655ce674e8e675f80f497c6041a09381c36ad66fd20dc4c80a1b953d87.scope: Deactivated successfully.
Oct 01 13:23:44 compute-0 podman[206507]: 2025-10-01 13:23:44.995721314 +0000 UTC m=+0.065567792 container create c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:23:45 compute-0 podman[206507]: 2025-10-01 13:23:44.953087611 +0000 UTC m=+0.022934089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:45 compute-0 systemd[1]: Started libpod-conmon-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope.
Oct 01 13:23:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:45 compute-0 podman[206507]: 2025-10-01 13:23:45.135222037 +0000 UTC m=+0.205068535 container init c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:23:45 compute-0 podman[206507]: 2025-10-01 13:23:45.142818434 +0000 UTC m=+0.212664892 container start c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:23:45 compute-0 sudo[206600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utpdmppbceiacxjhizsrepvyxphznbtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325024.8470075-676-254801402704848/AnsiballZ_file.py'
Oct 01 13:23:45 compute-0 sudo[206600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:45 compute-0 podman[206507]: 2025-10-01 13:23:45.167088844 +0000 UTC m=+0.236935322 container attach c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:23:45 compute-0 python3.9[206604]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:45 compute-0 sudo[206600]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:45 compute-0 ceph-mon[74802]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:45 compute-0 sudo[206754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmmaujhlcwkvivfvbympybutptszospu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325025.548042-676-213760462956968/AnsiballZ_file.py'
Oct 01 13:23:45 compute-0 sudo[206754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:46 compute-0 python3.9[206757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:46 compute-0 sudo[206754]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:46 compute-0 dazzling_wilbur[206571]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:23:46 compute-0 dazzling_wilbur[206571]: --> relative data size: 1.0
Oct 01 13:23:46 compute-0 dazzling_wilbur[206571]: --> All data devices are unavailable
Oct 01 13:23:46 compute-0 systemd[1]: libpod-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Deactivated successfully.
Oct 01 13:23:46 compute-0 systemd[1]: libpod-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Consumed 1.150s CPU time.
Oct 01 13:23:46 compute-0 podman[206507]: 2025-10-01 13:23:46.38163941 +0000 UTC m=+1.451485908 container died c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c9896fefcd30a49f4623851fd4a9c763522a4cea61cbdde26933c205e5e78b6-merged.mount: Deactivated successfully.
Oct 01 13:23:46 compute-0 sudo[206944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipwuwjjuzbrmtalspbfhdwtacoarvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325026.2530801-676-221028458469097/AnsiballZ_file.py'
Oct 01 13:23:46 compute-0 sudo[206944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:46 compute-0 podman[206507]: 2025-10-01 13:23:46.803644928 +0000 UTC m=+1.873491386 container remove c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:23:46 compute-0 systemd[1]: libpod-conmon-c8f1fe582d123af2faaea4a86407b89941af3a29e6ff038768e909da5229cfb4.scope: Deactivated successfully.
Oct 01 13:23:46 compute-0 python3.9[206946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:46 compute-0 sudo[206145]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:46 compute-0 sudo[206944]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:46 compute-0 sudo[206947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:46 compute-0 sudo[206947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:46 compute-0 sudo[206947]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:46 compute-0 sudo[206995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:23:46 compute-0 sudo[206995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:46 compute-0 sudo[206995]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:47 compute-0 sudo[207044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:47 compute-0 sudo[207044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:47 compute-0 sudo[207044]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:47 compute-0 sudo[207098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:23:47 compute-0 sudo[207098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:47 compute-0 sudo[207198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqlilnqcbyhpphqzexpsfgeemuoiiseo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325026.9755666-676-6254906035255/AnsiballZ_file.py'
Oct 01 13:23:47 compute-0 sudo[207198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:47 compute-0 python3.9[207206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:47 compute-0 sudo[207198]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:47 compute-0 podman[207238]: 2025-10-01 13:23:47.503894499 +0000 UTC m=+0.041028664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:47 compute-0 ceph-mon[74802]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:23:47
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:23:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:23:47 compute-0 podman[207238]: 2025-10-01 13:23:47.90198951 +0000 UTC m=+0.439123675 container create e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:23:47 compute-0 sudo[207402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvdbcdblbowjhkoripxyqyfxvnfhrdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325027.6521428-676-79889026234306/AnsiballZ_file.py'
Oct 01 13:23:47 compute-0 sudo[207402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:48 compute-0 systemd[1]: Started libpod-conmon-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope.
Oct 01 13:23:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:48 compute-0 podman[207238]: 2025-10-01 13:23:48.340100151 +0000 UTC m=+0.877234346 container init e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:23:48 compute-0 podman[207238]: 2025-10-01 13:23:48.354010586 +0000 UTC m=+0.891144741 container start e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:23:48 compute-0 awesome_volhard[207407]: 167 167
Oct 01 13:23:48 compute-0 systemd[1]: libpod-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope: Deactivated successfully.
Oct 01 13:23:48 compute-0 python3.9[207404]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:48 compute-0 sudo[207402]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:48 compute-0 podman[207238]: 2025-10-01 13:23:48.505441813 +0000 UTC m=+1.042575988 container attach e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:23:48 compute-0 podman[207238]: 2025-10-01 13:23:48.506063692 +0000 UTC m=+1.043197867 container died e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c434fbeb79d0abbc72a980dd16a3a4f41c97180dda940963650112b0ed24d7-merged.mount: Deactivated successfully.
Oct 01 13:23:49 compute-0 sudo[207576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthhetihwdkovnkenuhadxullivayxel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325028.7091045-676-131247524600209/AnsiballZ_file.py'
Oct 01 13:23:49 compute-0 sudo[207576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:49 compute-0 python3.9[207578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:49 compute-0 sudo[207576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:49 compute-0 podman[207238]: 2025-10-01 13:23:49.481979854 +0000 UTC m=+2.019114059 container remove e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:23:49 compute-0 systemd[1]: libpod-conmon-e7b7b30de1065939142b78d01f4321988a5bb5f9b0807507a787f307c6f41728.scope: Deactivated successfully.
Oct 01 13:23:49 compute-0 podman[207676]: 2025-10-01 13:23:49.632321337 +0000 UTC m=+0.025501939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:49 compute-0 sudo[207749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthluzinirznksztcudhrclnlgnvyojc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325029.452371-676-223657099085686/AnsiballZ_file.py'
Oct 01 13:23:49 compute-0 sudo[207749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:49 compute-0 podman[207676]: 2025-10-01 13:23:49.796197322 +0000 UTC m=+0.189377914 container create 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:23:50 compute-0 ceph-mon[74802]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:50 compute-0 systemd[1]: Started libpod-conmon-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope.
Oct 01 13:23:50 compute-0 python3.9[207751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:50 compute-0 sudo[207749]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:50 compute-0 podman[207676]: 2025-10-01 13:23:50.213888065 +0000 UTC m=+0.607068667 container init 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:23:50 compute-0 podman[207676]: 2025-10-01 13:23:50.226080007 +0000 UTC m=+0.619260569 container start 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:23:50 compute-0 podman[207676]: 2025-10-01 13:23:50.372025982 +0000 UTC m=+0.765206534 container attach 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:23:50 compute-0 sudo[207909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksambzteujpjarnhnblokaonliywwuzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325030.248125-676-36834234132841/AnsiballZ_file.py'
Oct 01 13:23:50 compute-0 sudo[207909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:50 compute-0 python3.9[207911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:50 compute-0 sudo[207909]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:51 compute-0 zealous_banach[207755]: {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     "0": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "devices": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "/dev/loop3"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             ],
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_name": "ceph_lv0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_size": "21470642176",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "name": "ceph_lv0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "tags": {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_name": "ceph",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.crush_device_class": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.encrypted": "0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_id": "0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.vdo": "0"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             },
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "vg_name": "ceph_vg0"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         }
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     ],
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     "1": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "devices": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "/dev/loop4"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             ],
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_name": "ceph_lv1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_size": "21470642176",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "name": "ceph_lv1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "tags": {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_name": "ceph",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.crush_device_class": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.encrypted": "0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_id": "1",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.vdo": "0"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             },
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "vg_name": "ceph_vg1"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         }
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     ],
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     "2": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "devices": [
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "/dev/loop5"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             ],
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_name": "ceph_lv2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_size": "21470642176",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "name": "ceph_lv2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "tags": {
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.cluster_name": "ceph",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.crush_device_class": "",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.encrypted": "0",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osd_id": "2",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:                 "ceph.vdo": "0"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             },
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "type": "block",
Oct 01 13:23:51 compute-0 zealous_banach[207755]:             "vg_name": "ceph_vg2"
Oct 01 13:23:51 compute-0 zealous_banach[207755]:         }
Oct 01 13:23:51 compute-0 zealous_banach[207755]:     ]
Oct 01 13:23:51 compute-0 zealous_banach[207755]: }
Oct 01 13:23:51 compute-0 systemd[1]: libpod-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope: Deactivated successfully.
Oct 01 13:23:51 compute-0 podman[207676]: 2025-10-01 13:23:51.148488205 +0000 UTC m=+1.541668757 container died 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:23:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:51 compute-0 sudo[208076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzoajuquoyysosqjxfevlbassurdtrwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325031.0778756-676-113005291768768/AnsiballZ_file.py'
Oct 01 13:23:51 compute-0 sudo[208076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:51 compute-0 python3.9[208078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:51 compute-0 sudo[208076]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:51 compute-0 ceph-mon[74802]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:52 compute-0 sudo[208231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njoyvtehgwdllvzwxzkpngtbsskmmpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325031.8448923-775-27922788647184/AnsiballZ_stat.py'
Oct 01 13:23:52 compute-0 sudo[208231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:52 compute-0 python3.9[208233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:52 compute-0 sudo[208231]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a97b1298f077d92990d2f7a3ba192e849e4dec35c4427ebc43c9db5b6425285a-merged.mount: Deactivated successfully.
Oct 01 13:23:52 compute-0 sudo[208354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtffsgaaxqvzpmdadqctzwalhsknqsvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325031.8448923-775-27922788647184/AnsiballZ_copy.py'
Oct 01 13:23:52 compute-0 sudo[208354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:53 compute-0 python3.9[208356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325031.8448923-775-27922788647184/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:53 compute-0 ceph-mon[74802]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:53 compute-0 podman[207676]: 2025-10-01 13:23:53.112751649 +0000 UTC m=+3.505932201 container remove 9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:23:53 compute-0 sudo[208354]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:53 compute-0 systemd[1]: libpod-conmon-9b7fc291ae588cdb7c7659d451b9397ce4dee35dd1a1fd4e2c94b12cded8093c.scope: Deactivated successfully.
Oct 01 13:23:53 compute-0 sudo[207098]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:53 compute-0 sudo[208369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:53 compute-0 sudo[208369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:53 compute-0 sudo[208369]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:53 compute-0 sudo[208412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:23:53 compute-0 sudo[208412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:53 compute-0 sudo[208412]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:53 compute-0 sudo[208466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:53 compute-0 sudo[208466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:53 compute-0 sudo[208466]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:53 compute-0 sudo[208511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:23:53 compute-0 sudo[208511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:53 compute-0 sudo[208617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trarorxgncmtxcjfxsybpmwssicvjwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325033.291063-775-122841392076707/AnsiballZ_stat.py'
Oct 01 13:23:53 compute-0 sudo[208617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:53 compute-0 podman[208648]: 2025-10-01 13:23:53.793384166 +0000 UTC m=+0.028131260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:53 compute-0 python3.9[208626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:53 compute-0 podman[208648]: 2025-10-01 13:23:53.923840456 +0000 UTC m=+0.158587480 container create b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:23:53 compute-0 sudo[208617]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:54 compute-0 systemd[1]: Started libpod-conmon-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope.
Oct 01 13:23:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:54 compute-0 sudo[208787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orfelesqzmnmfjbswakrlddfkqyhahiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325033.291063-775-122841392076707/AnsiballZ_copy.py'
Oct 01 13:23:54 compute-0 sudo[208787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:54 compute-0 podman[208648]: 2025-10-01 13:23:54.585022545 +0000 UTC m=+0.819769599 container init b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:23:54 compute-0 python3.9[208789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325033.291063-775-122841392076707/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:54 compute-0 podman[208648]: 2025-10-01 13:23:54.599366164 +0000 UTC m=+0.834113208 container start b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 13:23:54 compute-0 lucid_bartik[208711]: 167 167
Oct 01 13:23:54 compute-0 systemd[1]: libpod-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope: Deactivated successfully.
Oct 01 13:23:54 compute-0 sudo[208787]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:54 compute-0 podman[208648]: 2025-10-01 13:23:54.785572347 +0000 UTC m=+1.020319381 container attach b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:23:54 compute-0 podman[208648]: 2025-10-01 13:23:54.786770265 +0000 UTC m=+1.021517309 container died b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:23:55 compute-0 sudo[208955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkkpeauobtksdmorwqgxasptktabcxxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325034.8048651-775-24907974328116/AnsiballZ_stat.py'
Oct 01 13:23:55 compute-0 sudo[208955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cc609d8585343811c902111e2ff63fdf7abd313f068fa24f3de90319e2a1240-merged.mount: Deactivated successfully.
Oct 01 13:23:55 compute-0 python3.9[208957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:55 compute-0 sudo[208955]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:55 compute-0 podman[208648]: 2025-10-01 13:23:55.750164376 +0000 UTC m=+1.984911400 container remove b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:23:55 compute-0 systemd[1]: libpod-conmon-b8cbdd499563c34cbcc67eb6fe5820309065c906690f32c5e065117cca27b47e.scope: Deactivated successfully.
Oct 01 13:23:55 compute-0 sudo[209080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bradcceecdoqkzzdoplxgbrfovfcytef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325034.8048651-775-24907974328116/AnsiballZ_copy.py'
Oct 01 13:23:55 compute-0 sudo[209080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:55 compute-0 ceph-mon[74802]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:56 compute-0 podman[209088]: 2025-10-01 13:23:55.936167553 +0000 UTC m=+0.040455116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:23:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:56 compute-0 python3.9[209082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325034.8048651-775-24907974328116/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:56 compute-0 sudo[209080]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:23:56 compute-0 podman[209088]: 2025-10-01 13:23:56.224979126 +0000 UTC m=+0.329266589 container create d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:23:56 compute-0 systemd[1]: Started libpod-conmon-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope.
Oct 01 13:23:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:23:56 compute-0 sudo[209256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrmesclvpusaiymkebnkrvymcxrpcvnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325036.3326578-775-243850441185896/AnsiballZ_stat.py'
Oct 01 13:23:56 compute-0 sudo[209256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:56 compute-0 podman[209088]: 2025-10-01 13:23:56.898659255 +0000 UTC m=+1.002946748 container init d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:23:56 compute-0 python3.9[209258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:56 compute-0 podman[209088]: 2025-10-01 13:23:56.907955816 +0000 UTC m=+1.012243309 container start d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:23:56 compute-0 sudo[209256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:23:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:23:57 compute-0 podman[209088]: 2025-10-01 13:23:57.173825751 +0000 UTC m=+1.278113224 container attach d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:23:57 compute-0 ceph-mon[74802]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:57 compute-0 sudo[209381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrotkdgipefwdnmkynvksqknlharvogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325036.3326578-775-243850441185896/AnsiballZ_copy.py'
Oct 01 13:23:57 compute-0 sudo[209381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:57 compute-0 python3.9[209383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325036.3326578-775-243850441185896/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:57 compute-0 sudo[209381]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]: {
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_id": 0,
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "type": "bluestore"
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     },
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_id": 2,
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "type": "bluestore"
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     },
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_id": 1,
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:         "type": "bluestore"
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]:     }
Oct 01 13:23:57 compute-0 dreamy_shtern[209206]: }
Oct 01 13:23:57 compute-0 systemd[1]: libpod-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Deactivated successfully.
Oct 01 13:23:57 compute-0 podman[209088]: 2025-10-01 13:23:57.923966822 +0000 UTC m=+2.028254305 container died d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct 01 13:23:57 compute-0 systemd[1]: libpod-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Consumed 1.017s CPU time.
Oct 01 13:23:58 compute-0 sudo[209573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aphgloiaxuwgfhxevfxdassqsjemcqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325037.7195654-775-182480378900317/AnsiballZ_stat.py'
Oct 01 13:23:58 compute-0 sudo[209573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:58 compute-0 python3.9[209575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:58 compute-0 sudo[209573]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b738be4437ba136b2ef41fcda174de719f2bd1449a3bb41f75bfce3d103a25c4-merged.mount: Deactivated successfully.
Oct 01 13:23:58 compute-0 sudo[209699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujqjrxsdmjgknwdrkyccvvczrdtskavz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325037.7195654-775-182480378900317/AnsiballZ_copy.py'
Oct 01 13:23:58 compute-0 sudo[209699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:58 compute-0 python3.9[209701]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325037.7195654-775-182480378900317/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:23:58 compute-0 sudo[209699]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:59 compute-0 podman[209088]: 2025-10-01 13:23:59.155154388 +0000 UTC m=+3.259441851 container remove d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:23:59 compute-0 systemd[1]: libpod-conmon-d9d26c8bc6694b93776bb2500c650f90c45e6dfbda9b76c19dca23fd3f51fbbc.scope: Deactivated successfully.
Oct 01 13:23:59 compute-0 sudo[208511]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:23:59 compute-0 ceph-mon[74802]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:23:59 compute-0 sudo[209851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwklgbtkgnrdmxnebpzagyqtjorzbhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325039.041347-775-19637778349182/AnsiballZ_stat.py'
Oct 01 13:23:59 compute-0 sudo[209851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:59 compute-0 python3.9[209853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:23:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:23:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:23:59 compute-0 sudo[209851]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:23:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9369c22b-fe97-40b3-9d2c-974befbac54c does not exist
Oct 01 13:23:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev cb6f7ccb-3cae-42db-8989-54d0c28f90a9 does not exist
Oct 01 13:23:59 compute-0 sudo[209935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:23:59 compute-0 sudo[209935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:59 compute-0 sudo[209935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:23:59 compute-0 sudo[210019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldlrkjoluuuenapbbzmlhmafrghcnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325039.041347-775-19637778349182/AnsiballZ_copy.py'
Oct 01 13:23:59 compute-0 sudo[210019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:23:59 compute-0 sudo[209984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:23:59 compute-0 sudo[209984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:23:59 compute-0 sudo[209984]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:00 compute-0 python3.9[210024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325039.041347-775-19637778349182/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:00 compute-0 sudo[210019]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:00 compute-0 sudo[210176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwwhyuzjsjgguuopyvugbtopgvxqjshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325040.3530219-775-61896809371182/AnsiballZ_stat.py'
Oct 01 13:24:00 compute-0 sudo[210176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:24:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:24:00 compute-0 python3.9[210178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:00 compute-0 sudo[210176]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:01 compute-0 sudo[210299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfurtchqmzoqwmrkybhgxgvypyvsyntb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325040.3530219-775-61896809371182/AnsiballZ_copy.py'
Oct 01 13:24:01 compute-0 sudo[210299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:01 compute-0 python3.9[210301]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325040.3530219-775-61896809371182/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:01 compute-0 sudo[210299]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:01 compute-0 ceph-mon[74802]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:01 compute-0 sudo[210453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udsegmvkakcqisttsidsddsabwrgllpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325041.650386-775-65738693192394/AnsiballZ_stat.py'
Oct 01 13:24:02 compute-0 sudo[210453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:02 compute-0 python3.9[210455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:02 compute-0 sudo[210453]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:02 compute-0 sshd-session[210349]: Invalid user mcserver from 156.236.31.46 port 44858
Oct 01 13:24:02 compute-0 sshd-session[210349]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:24:02 compute-0 sshd-session[210349]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:24:02 compute-0 sudo[210576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drzizlknxsbzdnyyhcghycbsjzmluzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325041.650386-775-65738693192394/AnsiballZ_copy.py'
Oct 01 13:24:02 compute-0 sudo[210576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:02 compute-0 python3.9[210578]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325041.650386-775-65738693192394/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:02 compute-0 sudo[210576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:03 compute-0 ceph-mon[74802]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:03 compute-0 sudo[210728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzucltbqstszmqgwxlxtjnbrfiaeskys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325043.0389986-775-137470776848184/AnsiballZ_stat.py'
Oct 01 13:24:03 compute-0 sudo[210728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:03 compute-0 python3.9[210730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:03 compute-0 sudo[210728]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:04 compute-0 sudo[210851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shnznmrlallxwohcpgfmctetwtnczbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325043.0389986-775-137470776848184/AnsiballZ_copy.py'
Oct 01 13:24:04 compute-0 sudo[210851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:04 compute-0 python3.9[210853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325043.0389986-775-137470776848184/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:04 compute-0 sudo[210851]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:04 compute-0 sshd-session[210349]: Failed password for invalid user mcserver from 156.236.31.46 port 44858 ssh2
Oct 01 13:24:04 compute-0 sshd-session[210349]: Received disconnect from 156.236.31.46 port 44858:11: Bye Bye [preauth]
Oct 01 13:24:04 compute-0 sshd-session[210349]: Disconnected from invalid user mcserver 156.236.31.46 port 44858 [preauth]
Oct 01 13:24:04 compute-0 sudo[211003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzxqxmubqykrxmwvkzpyhkxdiukcbup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325044.4371088-775-246016588746173/AnsiballZ_stat.py'
Oct 01 13:24:04 compute-0 sudo[211003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:05 compute-0 python3.9[211005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:05 compute-0 sudo[211003]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:05 compute-0 ceph-mon[74802]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:05 compute-0 sudo[211126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqijhvmkcjuetyqoegvsgemyreagnpgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325044.4371088-775-246016588746173/AnsiballZ_copy.py'
Oct 01 13:24:05 compute-0 sudo[211126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:05 compute-0 python3.9[211128]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325044.4371088-775-246016588746173/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:05 compute-0 sudo[211126]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:06 compute-0 sudo[211278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdgihrykdkmdevgunfrfrdezmnmicse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325045.9663014-775-148177564735614/AnsiballZ_stat.py'
Oct 01 13:24:06 compute-0 sudo[211278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:06 compute-0 python3.9[211280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:06 compute-0 sudo[211278]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:07 compute-0 sudo[211401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxgmcecselameanqrnrnghxjepksvtfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325045.9663014-775-148177564735614/AnsiballZ_copy.py'
Oct 01 13:24:07 compute-0 sudo[211401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:07 compute-0 python3.9[211403]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325045.9663014-775-148177564735614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:07 compute-0 sudo[211401]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:07 compute-0 ceph-mon[74802]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:07 compute-0 sudo[211553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csimkgummaskpcsegvihimqkuaocbdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325047.431467-775-144001800504323/AnsiballZ_stat.py'
Oct 01 13:24:07 compute-0 sudo[211553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:08 compute-0 python3.9[211555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:08 compute-0 sudo[211553]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:08 compute-0 sudo[211676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsdqctliylfjexdmicvhtvlhnxohnffv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325047.431467-775-144001800504323/AnsiballZ_copy.py'
Oct 01 13:24:08 compute-0 sudo[211676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:08 compute-0 python3.9[211678]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325047.431467-775-144001800504323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:08 compute-0 sudo[211676]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:09 compute-0 sudo[211839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyvvwgzzctqibyfhmbqexrnzxxukdndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325048.9841897-775-18471621416948/AnsiballZ_stat.py'
Oct 01 13:24:09 compute-0 sudo[211839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:09 compute-0 ceph-mon[74802]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:09 compute-0 podman[211802]: 2025-10-01 13:24:09.467571425 +0000 UTC m=+0.133779136 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, container_name=ovn_controller)
Oct 01 13:24:09 compute-0 python3.9[211843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:09 compute-0 sudo[211839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:10 compute-0 sudo[211977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzxnhefcdzhzrerbadxkzdsmcgtglxvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325048.9841897-775-18471621416948/AnsiballZ_copy.py'
Oct 01 13:24:10 compute-0 sudo[211977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:10 compute-0 python3.9[211979]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325048.9841897-775-18471621416948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:10 compute-0 sudo[211977]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:10 compute-0 sudo[212129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tubdfrerixhxzjlsnxktvvqwuuvpajto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325050.587429-775-81249689093696/AnsiballZ_stat.py'
Oct 01 13:24:10 compute-0 sudo[212129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:11 compute-0 python3.9[212131]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:11 compute-0 sudo[212129]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:11 compute-0 ceph-mon[74802]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:11 compute-0 sudo[212252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aygclzbscqvmttmbyfdiwkpesxvieeec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325050.587429-775-81249689093696/AnsiballZ_copy.py'
Oct 01 13:24:11 compute-0 sudo[212252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:11 compute-0 python3.9[212254]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325050.587429-775-81249689093696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:11 compute-0 sudo[212252]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.288 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:24:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.288 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:24:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:24:12.289 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:24:12 compute-0 python3.9[212404]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:13 compute-0 sudo[212557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejdaioccnwwbsykovduheuenqhnksrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325052.7365227-981-244583881529474/AnsiballZ_seboolean.py'
Oct 01 13:24:13 compute-0 sudo[212557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:13 compute-0 python3.9[212559]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 01 13:24:13 compute-0 ceph-mon[74802]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:14 compute-0 dbus-broker-launch[786]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 01 13:24:14 compute-0 podman[212562]: 2025-10-01 13:24:14.549474695 +0000 UTC m=+0.085673811 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:24:14 compute-0 ceph-mon[74802]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:15 compute-0 sudo[212557]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:15 compute-0 sudo[212733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kztwuakbbzihaenbztwwsxrwkkylyfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325055.3397615-989-219112389267199/AnsiballZ_copy.py'
Oct 01 13:24:15 compute-0 sudo[212733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:15 compute-0 python3.9[212735]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:15 compute-0 sudo[212733]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:16 compute-0 sudo[212885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykziyfcpcbhdjtwyzfqwfrixerhswhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325056.0586674-989-121969969857484/AnsiballZ_copy.py'
Oct 01 13:24:16 compute-0 sudo[212885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:16 compute-0 python3.9[212887]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:16 compute-0 sudo[212885]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:17 compute-0 sudo[213037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pniqbyponvlobvnnfjicfxjfecyincuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325056.7569518-989-205689960617550/AnsiballZ_copy.py'
Oct 01 13:24:17 compute-0 sudo[213037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:17 compute-0 ceph-mon[74802]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:17 compute-0 python3.9[213039]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:17 compute-0 sudo[213037]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:17 compute-0 sudo[213189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsqwkqyguuuvnevvzvkeqwuobduiqpuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325057.5412874-989-75184638058958/AnsiballZ_copy.py'
Oct 01 13:24:17 compute-0 sudo[213189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:18 compute-0 python3.9[213191]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:18 compute-0 sudo[213189]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:18 compute-0 sudo[213341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukmlcwsaekzzcqkmvkflznudiulfdqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325058.3103304-989-69984357046711/AnsiballZ_copy.py'
Oct 01 13:24:18 compute-0 sudo[213341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:18 compute-0 python3.9[213343]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:18 compute-0 sudo[213341]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:19 compute-0 ceph-mon[74802]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:19 compute-0 sudo[213493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhucnwhsrzbdcnkyzfazzdphqmqfeufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325059.1588826-1025-21353292223685/AnsiballZ_copy.py'
Oct 01 13:24:19 compute-0 sudo[213493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:19 compute-0 python3.9[213495]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:19 compute-0 sudo[213493]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:20 compute-0 sudo[213645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veoqakzwivybidkhxmqmnswwxyuhilur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325059.862374-1025-59401425408502/AnsiballZ_copy.py'
Oct 01 13:24:20 compute-0 sudo[213645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:20 compute-0 python3.9[213647]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:20 compute-0 sudo[213645]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:20 compute-0 sudo[213797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrqyntwefvoimeqteowvokxiahuxcux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325060.561476-1025-19723531763834/AnsiballZ_copy.py'
Oct 01 13:24:20 compute-0 sudo[213797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:21 compute-0 python3.9[213799]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:21 compute-0 sudo[213797]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:21 compute-0 ceph-mon[74802]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:21 compute-0 sudo[213949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkplubminyggvdonilanfftdpludkxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325061.2972438-1025-161027330578230/AnsiballZ_copy.py'
Oct 01 13:24:21 compute-0 sudo[213949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:21 compute-0 python3.9[213951]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:21 compute-0 sudo[213949]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:22 compute-0 sudo[214101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvmxfsdqjxzsitcyoiolwrnlagdpvdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325062.150871-1025-167828859675048/AnsiballZ_copy.py'
Oct 01 13:24:22 compute-0 sudo[214101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:22 compute-0 python3.9[214103]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:22 compute-0 sudo[214101]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:23 compute-0 sudo[214253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzrtgeajsawgmcybyapkhsmygjthndvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325062.9562247-1061-36702618824938/AnsiballZ_systemd.py'
Oct 01 13:24:23 compute-0 sudo[214253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:23 compute-0 ceph-mon[74802]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:23 compute-0 python3.9[214255]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:24:23 compute-0 systemd[1]: Reloading.
Oct 01 13:24:23 compute-0 systemd-rc-local-generator[214281]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:24:23 compute-0 systemd-sysv-generator[214286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:24:24 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 01 13:24:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:24 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 01 13:24:24 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 01 13:24:24 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 01 13:24:24 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 01 13:24:24 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 01 13:24:24 compute-0 sudo[214253]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:24 compute-0 sudo[214447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbpamviuazicorznbibhjkthtjchbgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325064.4737167-1061-206330597925402/AnsiballZ_systemd.py'
Oct 01 13:24:24 compute-0 sudo[214447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:25 compute-0 python3.9[214449]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:24:25 compute-0 systemd[1]: Reloading.
Oct 01 13:24:25 compute-0 systemd-rc-local-generator[214477]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:24:25 compute-0 systemd-sysv-generator[214482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:24:25 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 01 13:24:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 01 13:24:25 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 01 13:24:25 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 01 13:24:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 01 13:24:25 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 01 13:24:25 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 01 13:24:25 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 01 13:24:25 compute-0 ceph-mon[74802]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:25 compute-0 sudo[214447]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:25 compute-0 sshd-session[214297]: Invalid user onlime_r from 80.94.95.116 port 21424
Oct 01 13:24:26 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 01 13:24:26 compute-0 sshd-session[214297]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:24:26 compute-0 sshd-session[214297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.116
Oct 01 13:24:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:26 compute-0 sudo[214664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjlaimyprjhunvzekhvyuvctwbnqstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325065.9197881-1061-59159352051215/AnsiballZ_systemd.py'
Oct 01 13:24:26 compute-0 sudo[214664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:26 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 01 13:24:26 compute-0 python3.9[214666]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:24:26 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Oct 01 13:24:26 compute-0 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 01 13:24:26 compute-0 systemd[1]: Reloading.
Oct 01 13:24:26 compute-0 systemd-rc-local-generator[214699]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:24:26 compute-0 systemd-sysv-generator[214702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:24:27 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 01 13:24:27 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 01 13:24:27 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 01 13:24:27 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 01 13:24:27 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 01 13:24:27 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 01 13:24:27 compute-0 sudo[214664]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:27 compute-0 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bd15bc-9fdd-45ca-9013-6e2ad0770344
Oct 01 13:24:27 compute-0 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 01 13:24:27 compute-0 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bd15bc-9fdd-45ca-9013-6e2ad0770344
Oct 01 13:24:27 compute-0 setroubleshoot[214594]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 01 13:24:27 compute-0 sudo[214882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasusjlkypwsixhwoxepzdjuqchcblbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325067.345904-1061-267786299261230/AnsiballZ_systemd.py'
Oct 01 13:24:27 compute-0 sudo[214882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:27 compute-0 ceph-mon[74802]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:28 compute-0 python3.9[214884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:24:28 compute-0 systemd[1]: Reloading.
Oct 01 13:24:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:28 compute-0 systemd-rc-local-generator[214913]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:24:28 compute-0 systemd-sysv-generator[214917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:24:28 compute-0 sshd-session[214297]: Failed password for invalid user onlime_r from 80.94.95.116 port 21424 ssh2
Oct 01 13:24:28 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 01 13:24:28 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 01 13:24:28 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 01 13:24:28 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 01 13:24:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 01 13:24:28 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 01 13:24:28 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 01 13:24:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 01 13:24:28 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 01 13:24:28 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 01 13:24:28 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 01 13:24:28 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 01 13:24:28 compute-0 sudo[214882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:29 compute-0 sudo[215095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yishcphdxgqikprbnavkolandmzbtblu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325068.7238536-1061-113178763472142/AnsiballZ_systemd.py'
Oct 01 13:24:29 compute-0 sudo[215095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:29 compute-0 sshd-session[214297]: Connection closed by invalid user onlime_r 80.94.95.116 port 21424 [preauth]
Oct 01 13:24:29 compute-0 python3.9[215097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:24:29 compute-0 systemd[1]: Reloading.
Oct 01 13:24:29 compute-0 systemd-rc-local-generator[215118]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:24:29 compute-0 systemd-sysv-generator[215124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:24:29 compute-0 ceph-mon[74802]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:30 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 01 13:24:30 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 01 13:24:30 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 01 13:24:30 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 01 13:24:30 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 01 13:24:30 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 01 13:24:30 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 01 13:24:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:30 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 01 13:24:30 compute-0 sudo[215095]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:30 compute-0 sudo[215304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzuusznjtfnjjgrzmygqmtmlvrptdhjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325070.4929497-1098-116846943613063/AnsiballZ_file.py'
Oct 01 13:24:30 compute-0 sudo[215304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:31 compute-0 ceph-mon[74802]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:31 compute-0 python3.9[215306]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:31 compute-0 sudo[215304]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:31 compute-0 sudo[215456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knfhcenecrtzohwsmervqovtuiwxjupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325071.361985-1106-83892557761581/AnsiballZ_find.py'
Oct 01 13:24:31 compute-0 sudo[215456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:31 compute-0 python3.9[215458]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:24:32 compute-0 sudo[215456]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:32 compute-0 sudo[215608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbzhkqnmuhriiycculnhhwcbqpsjuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325072.217076-1114-256726190738607/AnsiballZ_command.py'
Oct 01 13:24:32 compute-0 sudo[215608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:32 compute-0 python3.9[215610]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:32 compute-0 sudo[215608]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:33 compute-0 ceph-mon[74802]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:33 compute-0 python3.9[215764]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:24:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:34 compute-0 python3.9[215914]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:35 compute-0 ceph-mon[74802]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:35 compute-0 python3.9[216035]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325074.1762452-1133-40952870331326/.source.xml follow=False _original_basename=secret.xml.j2 checksum=85ea94ee6dc7b38556452772c4b1cde316396f1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:36 compute-0 sudo[216185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miumkzrsfaabdkjexhvjxjaobkqtynjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325075.8108604-1148-53262375618420/AnsiballZ_command.py'
Oct 01 13:24:36 compute-0 sudo[216185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:36 compute-0 python3.9[216187]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine eb4b6ead-01d1-53b3-a52a-47dcc600555f
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:36 compute-0 polkitd[6665]: Registered Authentication Agent for unix-process:216189:763722 (system bus name :1.2987 [/usr/bin/pkttyagent --process 216189 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 13:24:36 compute-0 polkitd[6665]: Unregistered Authentication Agent for unix-process:216189:763722 (system bus name :1.2987, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 13:24:36 compute-0 polkitd[6665]: Registered Authentication Agent for unix-process:216188:763721 (system bus name :1.2988 [/usr/bin/pkttyagent --process 216188 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 13:24:36 compute-0 polkitd[6665]: Unregistered Authentication Agent for unix-process:216188:763721 (system bus name :1.2988, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 13:24:36 compute-0 sudo[216185]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:37 compute-0 python3.9[216349]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:37 compute-0 ceph-mon[74802]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:37 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 01 13:24:37 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.046s CPU time.
Oct 01 13:24:37 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 01 13:24:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:38 compute-0 sudo[216499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakmlotlnoodvoxeygjfodkmuwrejgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325077.7718422-1164-156339367517349/AnsiballZ_command.py'
Oct 01 13:24:38 compute-0 sudo[216499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:38 compute-0 sudo[216499]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:39 compute-0 sudo[216652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoahjffasuxpkxodllxrgfzlqnvyebnc ; FSID=eb4b6ead-01d1-53b3-a52a-47dcc600555f KEY=AQCSJ91oAAAAABAAnrq6Xzc1a2WsnMS+ZR1nnw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325078.7628362-1172-251093900361611/AnsiballZ_command.py'
Oct 01 13:24:39 compute-0 sudo[216652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:39 compute-0 polkitd[6665]: Registered Authentication Agent for unix-process:216655:764010 (system bus name :1.2991 [/usr/bin/pkttyagent --process 216655 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 01 13:24:39 compute-0 polkitd[6665]: Unregistered Authentication Agent for unix-process:216655:764010 (system bus name :1.2991, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 01 13:24:39 compute-0 sudo[216652]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:39 compute-0 ceph-mon[74802]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:40 compute-0 sudo[216820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkhjmpnowuswiepvuvrawnxzigumrwbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325080.0176344-1180-59903655432653/AnsiballZ_copy.py'
Oct 01 13:24:40 compute-0 sudo[216820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:40 compute-0 podman[216784]: 2025-10-01 13:24:40.491667193 +0000 UTC m=+0.162236002 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:24:40 compute-0 python3.9[216830]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:40 compute-0 sudo[216820]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:40 compute-0 ceph-mon[74802]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:41 compute-0 sudo[216988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezzalkebeuybpadjwcxgphtzocmhnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325080.856165-1188-101214582124983/AnsiballZ_stat.py'
Oct 01 13:24:41 compute-0 sudo[216988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:41 compute-0 python3.9[216990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:41 compute-0 sudo[216988]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:41 compute-0 sudo[217111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryyndabfvxidhztqpgeqcarkhjwxrjdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325080.856165-1188-101214582124983/AnsiballZ_copy.py'
Oct 01 13:24:41 compute-0 sudo[217111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:42 compute-0 python3.9[217113]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325080.856165-1188-101214582124983/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:42 compute-0 sudo[217111]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:42 compute-0 sudo[217263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmewlgvrsxhjgomxsbshsyvjsjqgqqmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325082.5354893-1204-262706857077063/AnsiballZ_file.py'
Oct 01 13:24:42 compute-0 sudo[217263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:43 compute-0 python3.9[217265]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:43 compute-0 sudo[217263]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:43 compute-0 ceph-mon[74802]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:43 compute-0 sudo[217417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwdzcqqnjvnxtxndcmfxysmvaaiucrkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325083.4318829-1212-113741989113658/AnsiballZ_stat.py'
Oct 01 13:24:43 compute-0 sudo[217417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:44 compute-0 python3.9[217419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:44 compute-0 sudo[217417]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:44 compute-0 sshd-session[217365]: Invalid user ubuntu from 80.253.31.232 port 54856
Oct 01 13:24:44 compute-0 sshd-session[217365]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:24:44 compute-0 sshd-session[217365]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:24:44 compute-0 sudo[217495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbekvswevsbfiqbggxymsjufoumanezy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325083.4318829-1212-113741989113658/AnsiballZ_file.py'
Oct 01 13:24:44 compute-0 sudo[217495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:44 compute-0 python3.9[217497]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:44 compute-0 sudo[217495]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:45 compute-0 sudo[217660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sibqviweypubktmdzqkfgcenbowxuyfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325084.8869965-1224-197924906639828/AnsiballZ_stat.py'
Oct 01 13:24:45 compute-0 sudo[217660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:45 compute-0 ceph-mon[74802]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:45 compute-0 podman[217621]: 2025-10-01 13:24:45.336328223 +0000 UTC m=+0.095171794 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct 01 13:24:45 compute-0 python3.9[217666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:45 compute-0 sudo[217660]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:45 compute-0 sudo[217745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awoqvdjgievvwobqwqiyxykcyirnqwxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325084.8869965-1224-197924906639828/AnsiballZ_file.py'
Oct 01 13:24:45 compute-0 sudo[217745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:46 compute-0 python3.9[217747]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9rmonhe4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:46 compute-0 sudo[217745]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:46 compute-0 sshd-session[217365]: Failed password for invalid user ubuntu from 80.253.31.232 port 54856 ssh2
Oct 01 13:24:46 compute-0 sudo[217899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gouhguhdlcuvypgdmtfruhacaajkcayc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325086.3245978-1236-166023815504011/AnsiballZ_stat.py'
Oct 01 13:24:46 compute-0 sudo[217899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:46 compute-0 unix_chkpwd[217902]: password check failed for user (root)
Oct 01 13:24:46 compute-0 sshd-session[217748]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139  user=root
Oct 01 13:24:46 compute-0 python3.9[217901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:46 compute-0 sudo[217899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:47 compute-0 sudo[217980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikshsxgvaxyfunzhpwgqtgqhxrxxaext ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325086.3245978-1236-166023815504011/AnsiballZ_file.py'
Oct 01 13:24:47 compute-0 sudo[217980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:47 compute-0 sshd-session[217365]: Received disconnect from 80.253.31.232 port 54856:11: Bye Bye [preauth]
Oct 01 13:24:47 compute-0 sshd-session[217365]: Disconnected from invalid user ubuntu 80.253.31.232 port 54856 [preauth]
Oct 01 13:24:47 compute-0 ceph-mon[74802]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:47 compute-0 python3.9[217982]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:47 compute-0 sudo[217980]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:24:47
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:24:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:24:48 compute-0 sudo[218132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szbfolvicqvcytjmpniieofjcnvyaojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325087.6630228-1249-96390457338530/AnsiballZ_command.py'
Oct 01 13:24:48 compute-0 sudo[218132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:48 compute-0 sshd-session[217748]: Failed password for root from 200.7.101.139 port 59504 ssh2
Oct 01 13:24:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:48 compute-0 python3.9[218134]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:48 compute-0 sudo[218132]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:48 compute-0 sshd-session[217952]: Invalid user uploader from 27.254.137.144 port 38286
Oct 01 13:24:48 compute-0 sshd-session[217952]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:24:48 compute-0 sshd-session[217952]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:24:48 compute-0 sshd-session[217748]: Received disconnect from 200.7.101.139 port 59504:11: Bye Bye [preauth]
Oct 01 13:24:48 compute-0 sshd-session[217748]: Disconnected from authenticating user root 200.7.101.139 port 59504 [preauth]
Oct 01 13:24:48 compute-0 sudo[218285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhyuzhxldeetwzepbgrmzuhxvnofubfn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759325088.472285-1257-264325061899927/AnsiballZ_edpm_nftables_from_files.py'
Oct 01 13:24:48 compute-0 sudo[218285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:49 compute-0 python3[218287]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 01 13:24:49 compute-0 sudo[218285]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:49 compute-0 ceph-mon[74802]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:49 compute-0 sudo[218437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqldthajdwuvuctxniqrhxtfkupzstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325089.4523761-1265-23865844716943/AnsiballZ_stat.py'
Oct 01 13:24:49 compute-0 sudo[218437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:49 compute-0 python3.9[218439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:49 compute-0 sudo[218437]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:50 compute-0 sshd-session[217952]: Failed password for invalid user uploader from 27.254.137.144 port 38286 ssh2
Oct 01 13:24:50 compute-0 sudo[218515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eefejpqhavtihtavxatfjfjcttzaaqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325089.4523761-1265-23865844716943/AnsiballZ_file.py'
Oct 01 13:24:50 compute-0 sudo[218515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:50 compute-0 python3.9[218517]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:50 compute-0 sudo[218515]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:50 compute-0 sshd-session[217952]: Received disconnect from 27.254.137.144 port 38286:11: Bye Bye [preauth]
Oct 01 13:24:50 compute-0 sshd-session[217952]: Disconnected from invalid user uploader 27.254.137.144 port 38286 [preauth]
Oct 01 13:24:50 compute-0 sudo[218667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrrlyomiktbasbxdvfdzjorhjvfbdut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325090.5875242-1277-207531552893384/AnsiballZ_stat.py'
Oct 01 13:24:50 compute-0 sudo[218667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:51 compute-0 python3.9[218669]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:51 compute-0 sudo[218667]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:51 compute-0 sudo[218745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvehctpxtwheswfidofhujcpreqebehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325090.5875242-1277-207531552893384/AnsiballZ_file.py'
Oct 01 13:24:51 compute-0 sudo[218745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:51 compute-0 python3.9[218747]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:51 compute-0 ceph-mon[74802]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:51 compute-0 sudo[218745]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:52 compute-0 sudo[218897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icuqbccenreakrrjyfxixlaxmrevtgwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325091.77747-1289-18157471798727/AnsiballZ_stat.py'
Oct 01 13:24:52 compute-0 sudo[218897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:52 compute-0 python3.9[218899]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:52 compute-0 sudo[218897]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:52 compute-0 sudo[218975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piqyfblkwdvbohwtbjynuztjfiewrusl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325091.77747-1289-18157471798727/AnsiballZ_file.py'
Oct 01 13:24:52 compute-0 sudo[218975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:52 compute-0 python3.9[218977]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:52 compute-0 sudo[218975]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:53 compute-0 sudo[219127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyhvouhmntdodzhxavgfvgewwwuoaziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325093.0252469-1301-227610929975800/AnsiballZ_stat.py'
Oct 01 13:24:53 compute-0 sudo[219127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:53 compute-0 python3.9[219129]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:53 compute-0 ceph-mon[74802]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:53 compute-0 sudo[219127]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:53 compute-0 sudo[219205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsmjtsqsejxexbcqwbmnmudwojmsvxdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325093.0252469-1301-227610929975800/AnsiballZ_file.py'
Oct 01 13:24:53 compute-0 sudo[219205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:54 compute-0 python3.9[219207]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:54 compute-0 sudo[219205]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:54 compute-0 sudo[219357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeyurfmfzmwlicltmlaldzhzdffyanct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325094.3993096-1313-263158143723684/AnsiballZ_stat.py'
Oct 01 13:24:54 compute-0 sudo[219357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:55 compute-0 python3.9[219359]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:24:55 compute-0 sudo[219357]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:55 compute-0 ceph-mon[74802]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:55 compute-0 sudo[219482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfszjcydeboxmruclyglafwuztizjovg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325094.3993096-1313-263158143723684/AnsiballZ_copy.py'
Oct 01 13:24:55 compute-0 sudo[219482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:56 compute-0 python3.9[219484]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759325094.3993096-1313-263158143723684/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:56 compute-0 sudo[219482]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:24:56 compute-0 sudo[219634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwxrevdkzvxnxgretccsnfiqypfkkjlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325096.2607553-1328-267036327110793/AnsiballZ_file.py'
Oct 01 13:24:56 compute-0 sudo[219634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:56 compute-0 python3.9[219636]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:56 compute-0 sudo[219634]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:24:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:24:57 compute-0 sudo[219786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uquywgwlazwkzvygmrdhyvgtwdnoljel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325097.0756476-1336-14165828245865/AnsiballZ_command.py'
Oct 01 13:24:57 compute-0 sudo[219786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:57 compute-0 ceph-mon[74802]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:57 compute-0 python3.9[219788]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:57 compute-0 sudo[219786]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:58 compute-0 sudo[219941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iayelxvzopwksjmcyviqramjbbrngqnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325097.8017213-1344-92068790337571/AnsiballZ_blockinfile.py'
Oct 01 13:24:58 compute-0 sudo[219941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:58 compute-0 python3.9[219943]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:24:58 compute-0 sudo[219941]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:59 compute-0 sudo[220093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlmgcnanojoktaosimixadjpqzfmqdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325098.8023272-1353-270557763352467/AnsiballZ_command.py'
Oct 01 13:24:59 compute-0 sudo[220093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:24:59 compute-0 python3.9[220095]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:24:59 compute-0 ceph-mon[74802]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:24:59 compute-0 sudo[220093]: pam_unix(sudo:session): session closed for user root
Oct 01 13:24:59 compute-0 sudo[220246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhezynssbwsqpubuxvsoiillxcnjudd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325099.6336353-1361-119782078183741/AnsiballZ_stat.py'
Oct 01 13:24:59 compute-0 sudo[220246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:00 compute-0 sudo[220247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:00 compute-0 sudo[220247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220247]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 sudo[220274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:25:00 compute-0 sudo[220274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220274]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:00 compute-0 sudo[220299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:00 compute-0 sudo[220299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220299]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 python3.9[220254]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:25:00 compute-0 sudo[220246]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 sudo[220324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:25:00 compute-0 sudo[220324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knohdfvjvpiotcuyzjjcenrxwkjjkzyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325100.3630588-1369-221016080988331/AnsiballZ_command.py'
Oct 01 13:25:00 compute-0 sudo[220324]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 sudo[220531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e78f39d1-32f4-497a-9a9c-dbfdb2b3a7a1 does not exist
Oct 01 13:25:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d8eadeab-e9e9-4c2a-9312-eb9e4ab786cf does not exist
Oct 01 13:25:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e0279758-6973-40c8-883b-eaa5a9ede712 does not exist
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:25:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:25:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:25:00 compute-0 sudo[220534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:00 compute-0 sudo[220534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220534]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 python3.9[220533]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:25:00 compute-0 sudo[220559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:25:00 compute-0 sudo[220559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220559]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 sudo[220531]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:00 compute-0 sudo[220587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:00 compute-0 sudo[220587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:00 compute-0 sudo[220587]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:01 compute-0 sudo[220634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:25:01 compute-0 sudo[220634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:01 compute-0 ceph-mon[74802]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:25:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.456554393 +0000 UTC m=+0.045794586 container create ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:25:01 compute-0 systemd[1]: Started libpod-conmon-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope.
Oct 01 13:25:01 compute-0 sudo[220842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gulakascvqfldsbesouebfnheiyhnroy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325101.1299815-1377-138621535664896/AnsiballZ_file.py'
Oct 01 13:25:01 compute-0 sudo[220842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.433537387 +0000 UTC m=+0.022777580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.56724107 +0000 UTC m=+0.156481283 container init ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.576642172 +0000 UTC m=+0.165882375 container start ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:25:01 compute-0 flamboyant_shamir[220844]: 167 167
Oct 01 13:25:01 compute-0 systemd[1]: libpod-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope: Deactivated successfully.
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.589323048 +0000 UTC m=+0.178563241 container attach ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.590018059 +0000 UTC m=+0.179258272 container died ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-52eeabc56561c559152fcc704647706c67815a872f4e2d5dc577305f67b8efe0-merged.mount: Deactivated successfully.
Oct 01 13:25:01 compute-0 python3.9[220846]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:01 compute-0 sudo[220842]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:01 compute-0 podman[220800]: 2025-10-01 13:25:01.745965395 +0000 UTC m=+0.335205588 container remove ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:25:01 compute-0 systemd[1]: libpod-conmon-ea8d4e0d6eba30cd6cb1a11d2db298e86ac924d5eda1f7e1cf440481c58a6f8e.scope: Deactivated successfully.
Oct 01 13:25:01 compute-0 podman[220897]: 2025-10-01 13:25:01.931565614 +0000 UTC m=+0.056633255 container create 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:25:01 compute-0 systemd[1]: Started libpod-conmon-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope.
Oct 01 13:25:02 compute-0 podman[220897]: 2025-10-01 13:25:01.908623509 +0000 UTC m=+0.033691160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:02 compute-0 podman[220897]: 2025-10-01 13:25:02.04547194 +0000 UTC m=+0.170539601 container init 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:25:02 compute-0 podman[220897]: 2025-10-01 13:25:02.065805503 +0000 UTC m=+0.190873124 container start 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:25:02 compute-0 podman[220897]: 2025-10-01 13:25:02.069973992 +0000 UTC m=+0.195041653 container attach 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:25:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:02 compute-0 sudo[221041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riqyhqwcdyimkjxnuqvehwpwqpiebilh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325101.9024787-1385-123753754337344/AnsiballZ_stat.py'
Oct 01 13:25:02 compute-0 sudo[221041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:02 compute-0 python3.9[221043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:02 compute-0 sudo[221041]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:02 compute-0 sudo[221164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsfpzekzdjergefctfyqyzpluhgcjtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325101.9024787-1385-123753754337344/AnsiballZ_copy.py'
Oct 01 13:25:02 compute-0 sudo[221164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:03 compute-0 python3.9[221168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325101.9024787-1385-123753754337344/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:03 compute-0 sudo[221164]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:03 compute-0 cranky_dirac[220963]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:25:03 compute-0 cranky_dirac[220963]: --> relative data size: 1.0
Oct 01 13:25:03 compute-0 cranky_dirac[220963]: --> All data devices are unavailable
Oct 01 13:25:03 compute-0 systemd[1]: libpod-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Deactivated successfully.
Oct 01 13:25:03 compute-0 systemd[1]: libpod-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Consumed 1.172s CPU time.
Oct 01 13:25:03 compute-0 podman[220897]: 2025-10-01 13:25:03.29551163 +0000 UTC m=+1.420579261 container died 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3f26ebafb0645b209b006fbcdf79dcb27d84768e8d2b19ab441f430d2dd6c24-merged.mount: Deactivated successfully.
Oct 01 13:25:03 compute-0 podman[220897]: 2025-10-01 13:25:03.393231983 +0000 UTC m=+1.518299624 container remove 5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:25:03 compute-0 systemd[1]: libpod-conmon-5158c3f9a39a129786e1c1728316fd88a06318bcd512eac4e464e44e2b229297.scope: Deactivated successfully.
Oct 01 13:25:03 compute-0 sudo[220634]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:03 compute-0 ceph-mon[74802]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:03 compute-0 sudo[221311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:03 compute-0 sudo[221311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:03 compute-0 sudo[221311]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:03 compute-0 sudo[221396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpapdfawzcpigeedzobdwfnmkyvzfcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325103.2456582-1400-223056748277787/AnsiballZ_stat.py'
Oct 01 13:25:03 compute-0 sudo[221396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:03 compute-0 sudo[221363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:25:03 compute-0 sudo[221363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:03 compute-0 sudo[221363]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:03 compute-0 sudo[221406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:03 compute-0 sudo[221406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:03 compute-0 sudo[221406]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:03 compute-0 sudo[221431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:25:03 compute-0 sudo[221431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:03 compute-0 python3.9[221403]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:03 compute-0 sudo[221396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.154557898 +0000 UTC m=+0.078397412 container create 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:25:04 compute-0 sudo[221630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzrbhgczlediyedauftjtvwdollgggal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325103.2456582-1400-223056748277787/AnsiballZ_copy.py'
Oct 01 13:25:04 compute-0 sudo[221630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:04 compute-0 systemd[1]: Started libpod-conmon-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope.
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.116186632 +0000 UTC m=+0.040026166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.288335173 +0000 UTC m=+0.212174767 container init 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.302019358 +0000 UTC m=+0.225858892 container start 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:25:04 compute-0 friendly_mendeleev[221635]: 167 167
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.310249285 +0000 UTC m=+0.234088829 container attach 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:25:04 compute-0 systemd[1]: libpod-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope: Deactivated successfully.
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.311353419 +0000 UTC m=+0.235192953 container died 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-09e57012ea0df3ddfb328834bf7d0b09e6541bf6e4f588f79af1fdd89e626850-merged.mount: Deactivated successfully.
Oct 01 13:25:04 compute-0 podman[221576]: 2025-10-01 13:25:04.379005655 +0000 UTC m=+0.302845199 container remove 21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:25:04 compute-0 python3.9[221634]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325103.2456582-1400-223056748277787/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:04 compute-0 systemd[1]: libpod-conmon-21fcf947ac9df5189fcd903515ab8711157edab4c26cff0dab2604957f329069.scope: Deactivated successfully.
Oct 01 13:25:04 compute-0 sudo[221630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:04 compute-0 podman[221686]: 2025-10-01 13:25:04.696204931 +0000 UTC m=+0.120818353 container create 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:25:04 compute-0 podman[221686]: 2025-10-01 13:25:04.62067423 +0000 UTC m=+0.045287702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:04 compute-0 systemd[1]: Started libpod-conmon-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope.
Oct 01 13:25:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:04 compute-0 podman[221686]: 2025-10-01 13:25:04.942908973 +0000 UTC m=+0.367522435 container init 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:25:04 compute-0 podman[221686]: 2025-10-01 13:25:04.959976454 +0000 UTC m=+0.384589836 container start 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:25:04 compute-0 podman[221686]: 2025-10-01 13:25:04.964185425 +0000 UTC m=+0.388798907 container attach 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct 01 13:25:04 compute-0 sudo[221832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhusxjuaohrayhovmkympqzdvnhcrifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325104.6157143-1415-157791203764164/AnsiballZ_stat.py'
Oct 01 13:25:05 compute-0 sudo[221832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:05 compute-0 python3.9[221834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:05 compute-0 sudo[221832]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:05 compute-0 ceph-mon[74802]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:05 compute-0 sudo[221955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxyrujdbjcljewkecziexsditjgfhmcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325104.6157143-1415-157791203764164/AnsiballZ_copy.py'
Oct 01 13:25:05 compute-0 sudo[221955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:05 compute-0 confident_nash[221800]: {
Oct 01 13:25:05 compute-0 confident_nash[221800]:     "0": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:         {
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "devices": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "/dev/loop3"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             ],
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_name": "ceph_lv0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_size": "21470642176",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "name": "ceph_lv0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "tags": {
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_name": "ceph",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.crush_device_class": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.encrypted": "0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_id": "0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.vdo": "0"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             },
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "vg_name": "ceph_vg0"
Oct 01 13:25:05 compute-0 confident_nash[221800]:         }
Oct 01 13:25:05 compute-0 confident_nash[221800]:     ],
Oct 01 13:25:05 compute-0 confident_nash[221800]:     "1": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:         {
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "devices": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "/dev/loop4"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             ],
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_name": "ceph_lv1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_size": "21470642176",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "name": "ceph_lv1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "tags": {
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_name": "ceph",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.crush_device_class": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.encrypted": "0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_id": "1",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.vdo": "0"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             },
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "vg_name": "ceph_vg1"
Oct 01 13:25:05 compute-0 confident_nash[221800]:         }
Oct 01 13:25:05 compute-0 confident_nash[221800]:     ],
Oct 01 13:25:05 compute-0 confident_nash[221800]:     "2": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:         {
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "devices": [
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "/dev/loop5"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             ],
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_name": "ceph_lv2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_size": "21470642176",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "name": "ceph_lv2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "tags": {
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.cluster_name": "ceph",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.crush_device_class": "",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.encrypted": "0",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osd_id": "2",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:                 "ceph.vdo": "0"
Oct 01 13:25:05 compute-0 confident_nash[221800]:             },
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "type": "block",
Oct 01 13:25:05 compute-0 confident_nash[221800]:             "vg_name": "ceph_vg2"
Oct 01 13:25:05 compute-0 confident_nash[221800]:         }
Oct 01 13:25:05 compute-0 confident_nash[221800]:     ]
Oct 01 13:25:05 compute-0 confident_nash[221800]: }
Oct 01 13:25:05 compute-0 systemd[1]: libpod-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope: Deactivated successfully.
Oct 01 13:25:05 compute-0 podman[221686]: 2025-10-01 13:25:05.760670144 +0000 UTC m=+1.185283536 container died 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f6fdf59351adc23771d510da21f55145ddad56b30d66904253173a47bf1d333-merged.mount: Deactivated successfully.
Oct 01 13:25:05 compute-0 python3.9[221958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325104.6157143-1415-157791203764164/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:05 compute-0 podman[221686]: 2025-10-01 13:25:05.849871611 +0000 UTC m=+1.274485003 container remove 31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:25:05 compute-0 sudo[221955]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:05 compute-0 systemd[1]: libpod-conmon-31a6bec20cb6922eedbb3d5dd46f58341daf3e2ca346b4e310681033f10f12be.scope: Deactivated successfully.
Oct 01 13:25:05 compute-0 sudo[221431]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:05 compute-0 sudo[221996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:05 compute-0 sudo[221996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:05 compute-0 sudo[221996]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:06 compute-0 sudo[222027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:25:06 compute-0 sudo[222027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:06 compute-0 sudo[222027]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:06 compute-0 sudo[222078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:06 compute-0 sudo[222078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:06 compute-0 sudo[222078]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:06 compute-0 sudo[222127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:25:06 compute-0 sudo[222127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:06 compute-0 sudo[222232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnhorcfpzhjabvhohbjwlvazxoccuhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325106.0428054-1430-5578199702004/AnsiballZ_systemd.py'
Oct 01 13:25:06 compute-0 sudo[222232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.634846502 +0000 UTC m=+0.093541774 container create 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.572365887 +0000 UTC m=+0.031061189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:06 compute-0 systemd[1]: Started libpod-conmon-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope.
Oct 01 13:25:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:06 compute-0 python3.9[222241]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.75008341 +0000 UTC m=+0.208778792 container init 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.76135858 +0000 UTC m=+0.220053882 container start 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:25:06 compute-0 systemd[1]: Reloading.
Oct 01 13:25:06 compute-0 inspiring_satoshi[222285]: 167 167
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.784357107 +0000 UTC m=+0.243052389 container attach 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:25:06 compute-0 podman[222269]: 2025-10-01 13:25:06.786122312 +0000 UTC m=+0.244817574 container died 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:25:06 compute-0 systemd-sysv-generator[222336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:06 compute-0 systemd-rc-local-generator[222333]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:07 compute-0 systemd[1]: libpod-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope: Deactivated successfully.
Oct 01 13:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fd494f8f7426187340c67ae2f8b34000f993f3dd2e092a6a5f2b929ae5f32a-merged.mount: Deactivated successfully.
Oct 01 13:25:07 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 01 13:25:07 compute-0 sudo[222232]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:07 compute-0 podman[222269]: 2025-10-01 13:25:07.262062931 +0000 UTC m=+0.720758223 container remove 198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:25:07 compute-0 systemd[1]: libpod-conmon-198561a3c1c23ff6d860d514d58f1dd84ab5b1aa689d31dd3d226869570f2392.scope: Deactivated successfully.
Oct 01 13:25:07 compute-0 unix_chkpwd[222428]: password check failed for user (root)
Oct 01 13:25:07 compute-0 sshd-session[222306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46  user=root
Oct 01 13:25:07 compute-0 podman[222405]: 2025-10-01 13:25:07.49713299 +0000 UTC m=+0.048961856 container create bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:25:07 compute-0 systemd[1]: Started libpod-conmon-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope.
Oct 01 13:25:07 compute-0 podman[222405]: 2025-10-01 13:25:07.47690168 +0000 UTC m=+0.028730596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:25:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:25:07 compute-0 ceph-mon[74802]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:07 compute-0 podman[222405]: 2025-10-01 13:25:07.642341341 +0000 UTC m=+0.194170227 container init bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:25:07 compute-0 podman[222405]: 2025-10-01 13:25:07.656587514 +0000 UTC m=+0.208416410 container start bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:25:07 compute-0 podman[222405]: 2025-10-01 13:25:07.674162261 +0000 UTC m=+0.225991147 container attach bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:25:07 compute-0 sudo[222525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhpxetdblmznkbhckgilsizhmhmpocw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325107.4120233-1438-123796605795012/AnsiballZ_systemd.py'
Oct 01 13:25:07 compute-0 sudo[222525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:08 compute-0 python3.9[222527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 01 13:25:08 compute-0 systemd[1]: Reloading.
Oct 01 13:25:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:08 compute-0 systemd-sysv-generator[222556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:08 compute-0 systemd-rc-local-generator[222553]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:08 compute-0 systemd[1]: Reloading.
Oct 01 13:25:08 compute-0 systemd-rc-local-generator[222611]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:08 compute-0 systemd-sysv-generator[222621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:08 compute-0 kind_villani[222470]: {
Oct 01 13:25:08 compute-0 kind_villani[222470]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_id": 0,
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "type": "bluestore"
Oct 01 13:25:08 compute-0 kind_villani[222470]:     },
Oct 01 13:25:08 compute-0 kind_villani[222470]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_id": 2,
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "type": "bluestore"
Oct 01 13:25:08 compute-0 kind_villani[222470]:     },
Oct 01 13:25:08 compute-0 kind_villani[222470]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_id": 1,
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:25:08 compute-0 kind_villani[222470]:         "type": "bluestore"
Oct 01 13:25:08 compute-0 kind_villani[222470]:     }
Oct 01 13:25:08 compute-0 kind_villani[222470]: }
Oct 01 13:25:08 compute-0 podman[222405]: 2025-10-01 13:25:08.682547968 +0000 UTC m=+1.234376844 container died bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:25:08 compute-0 systemd[1]: libpod-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Deactivated successfully.
Oct 01 13:25:08 compute-0 systemd[1]: libpod-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Consumed 1.015s CPU time.
Oct 01 13:25:08 compute-0 sudo[222525]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1f093f463a2fd5f07dade0837774f2e12c6d31475aead7629fb5af719e9a766-merged.mount: Deactivated successfully.
Oct 01 13:25:08 compute-0 podman[222405]: 2025-10-01 13:25:08.880208102 +0000 UTC m=+1.432036968 container remove bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_villani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:25:08 compute-0 systemd[1]: libpod-conmon-bee4c41a32eb946243966542b5b57e644627696c33b8764087853bc811fc53df.scope: Deactivated successfully.
Oct 01 13:25:08 compute-0 sudo[222127]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:25:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:25:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev cbd72fd2-90a0-4200-9192-73c4ab65af6c does not exist
Oct 01 13:25:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3567beec-00f1-428b-bee5-f5b74134a67e does not exist
Oct 01 13:25:09 compute-0 sudo[222667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:25:09 compute-0 sudo[222667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:09 compute-0 sudo[222667]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:09 compute-0 sudo[222692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:25:09 compute-0 sudo[222692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:25:09 compute-0 sudo[222692]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:09 compute-0 sshd-session[162502]: Connection closed by 192.168.122.30 port 43770
Oct 01 13:25:09 compute-0 sshd-session[162499]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:25:09 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct 01 13:25:09 compute-0 systemd[1]: session-49.scope: Consumed 3min 44.937s CPU time.
Oct 01 13:25:09 compute-0 systemd-logind[818]: Session 49 logged out. Waiting for processes to exit.
Oct 01 13:25:09 compute-0 systemd-logind[818]: Removed session 49.
Oct 01 13:25:09 compute-0 sshd-session[222306]: Failed password for root from 156.236.31.46 port 44954 ssh2
Oct 01 13:25:09 compute-0 ceph-mon[74802]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:25:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:11 compute-0 sshd-session[222306]: Received disconnect from 156.236.31.46 port 44954:11: Bye Bye [preauth]
Oct 01 13:25:11 compute-0 sshd-session[222306]: Disconnected from authenticating user root 156.236.31.46 port 44954 [preauth]
Oct 01 13:25:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:11 compute-0 podman[222717]: 2025-10-01 13:25:11.567882333 +0000 UTC m=+0.114605169 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 13:25:11 compute-0 ceph-mon[74802]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.290 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:25:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.291 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:25:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:25:12.292 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:25:13 compute-0 ceph-mon[74802]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:14 compute-0 sshd-session[222744]: Accepted publickey for zuul from 192.168.122.30 port 40368 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:25:14 compute-0 systemd-logind[818]: New session 50 of user zuul.
Oct 01 13:25:14 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct 01 13:25:14 compute-0 sshd-session[222744]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:25:15 compute-0 podman[222800]: 2025-10-01 13:25:15.540647628 +0000 UTC m=+0.089527158 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 13:25:15 compute-0 ceph-mon[74802]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:16 compute-0 python3.9[222916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:25:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:17 compute-0 sudo[223070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurswiofxyaqlvbqsdbotcatyevjemim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325116.630701-34-71034588793618/AnsiballZ_file.py'
Oct 01 13:25:17 compute-0 sudo[223070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:17 compute-0 python3.9[223072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:17 compute-0 sudo[223070]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:17 compute-0 ceph-mon[74802]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:17 compute-0 sudo[223222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghgbomosuhikgvbwfbaysjotznxmkkrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325117.5030055-34-59862918692287/AnsiballZ_file.py'
Oct 01 13:25:17 compute-0 sudo[223222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:17 compute-0 python3.9[223224]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:18 compute-0 sudo[223222]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:18 compute-0 sudo[223374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwpgzlljunhoisznzfxbcyelrumkqybr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325118.1657991-34-90183879243838/AnsiballZ_file.py'
Oct 01 13:25:18 compute-0 sudo[223374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:18 compute-0 python3.9[223376]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:18 compute-0 sudo[223374]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:19 compute-0 sudo[223526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asegkrewggmzztmzxixhcluuiaiirkxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325118.7800372-34-262104682010798/AnsiballZ_file.py'
Oct 01 13:25:19 compute-0 sudo[223526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:19 compute-0 python3.9[223528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 13:25:19 compute-0 sudo[223526]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:19 compute-0 ceph-mon[74802]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:19 compute-0 sudo[223678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllanipgrhcjdtpoiliqzvdleunxnern ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325119.4786367-34-207201657396919/AnsiballZ_file.py'
Oct 01 13:25:19 compute-0 sudo[223678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:19 compute-0 python3.9[223680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:19 compute-0 sudo[223678]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:20 compute-0 sudo[223830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhgengpmaxxsedkoxprfbchyibgzmmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325120.1347632-70-22537710710412/AnsiballZ_stat.py'
Oct 01 13:25:20 compute-0 sudo[223830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:20 compute-0 python3.9[223832]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:25:20 compute-0 sudo[223830]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:21 compute-0 sudo[223984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbonpwuwjmunnercnhwhgfwclgmygbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325121.0675192-78-72043937509922/AnsiballZ_systemd.py'
Oct 01 13:25:21 compute-0 sudo[223984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:21 compute-0 ceph-mon[74802]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:22 compute-0 python3.9[223986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:25:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:22 compute-0 systemd[1]: Reloading.
Oct 01 13:25:22 compute-0 systemd-rc-local-generator[224017]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:22 compute-0 systemd-sysv-generator[224023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:22 compute-0 sudo[223984]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:23 compute-0 sudo[224174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrbdvkobsvxxkdwikwewkrqrvochwrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325122.8559737-86-98231838200496/AnsiballZ_service_facts.py'
Oct 01 13:25:23 compute-0 sudo[224174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:23 compute-0 python3.9[224176]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:25:23 compute-0 network[224193]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:25:23 compute-0 network[224194]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:25:23 compute-0 network[224195]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:25:23 compute-0 ceph-mon[74802]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:25 compute-0 ceph-mon[74802]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:27 compute-0 ceph-mon[74802]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:28 compute-0 sudo[224174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:28 compute-0 sudo[224467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswdwcqzqybpyhcrfhdiyotkxnouxhsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325128.3132234-94-12011564413011/AnsiballZ_systemd.py'
Oct 01 13:25:28 compute-0 sudo[224467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:28 compute-0 python3.9[224469]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:25:29 compute-0 systemd[1]: Reloading.
Oct 01 13:25:29 compute-0 systemd-rc-local-generator[224498]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:29 compute-0 systemd-sysv-generator[224502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:29 compute-0 ceph-mon[74802]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:29 compute-0 sudo[224467]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:30 compute-0 python3.9[224656]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:25:31 compute-0 sudo[224806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mladcaexyzmajxanfdgjrjqhyqtcbvjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325130.497868-111-187276563465973/AnsiballZ_podman_container.py'
Oct 01 13:25:31 compute-0 sudo[224806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:31 compute-0 ceph-mon[74802]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:31 compute-0 python3.9[224808]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22 name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 13:25:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:32 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:25:32 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:25:33 compute-0 podman[224820]: 2025-10-01 13:25:33.146246145 +0000 UTC m=+1.631618122 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct 01 13:25:33 compute-0 ceph-mon[74802]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.311555882 +0000 UTC m=+0.050247165 container create 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.3680] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.288908987 +0000 UTC m=+0.027600310 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 13:25:33 compute-0 kernel: veth0: entered allmulticast mode
Oct 01 13:25:33 compute-0 kernel: veth0: entered promiscuous mode
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.3972] device (veth0): carrier: link connected
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.3982] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4007] device (podman0): carrier: link connected
Oct 01 13:25:33 compute-0 systemd-udevd[224904]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:25:33 compute-0 systemd-udevd[224906]: Network interface NamePolicy= disabled on kernel command line.
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4316] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4333] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4347] device (podman0): Activation: starting connection 'podman0' (1ea7e56b-b21f-4308-a7e0-4e8eb0d4c775)
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4350] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4355] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4359] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4363] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 01 13:25:33 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4801] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4804] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.4815] device (podman0): Activation: successful, device activated.
Oct 01 13:25:33 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 01 13:25:33 compute-0 systemd[1]: Started libpod-conmon-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope.
Oct 01 13:25:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.781308478 +0000 UTC m=+0.519999781 container init 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.794812148 +0000 UTC m=+0.533503421 container start 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS)
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.798559075 +0000 UTC m=+0.537250378 container attach 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:25:33 compute-0 iscsid_config[225037]: iqn.1994-05.com.redhat:d708ef469d6
Oct 01 13:25:33 compute-0 systemd[1]: libpod-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope: Deactivated successfully.
Oct 01 13:25:33 compute-0 podman[224880]: 2025-10-01 13:25:33.801979062 +0000 UTC m=+0.540670375 container died 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923)
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 13:25:33 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 01 13:25:33 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 01 13:25:33 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 01 13:25:33 compute-0 NetworkManager[45411]: <info>  [1759325133.8622] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 01 13:25:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:34 compute-0 systemd[1]: run-netns-netns\x2dfb9a2bda\x2d3d38\x2da405\x2d3108\x2d02651c9856ff.mount: Deactivated successfully.
Oct 01 13:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4-userdata-shm.mount: Deactivated successfully.
Oct 01 13:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f1d6217c96bdaf4ecb291d4656f9ef5700e94ff84ce84497a5769e84e77ff1-merged.mount: Deactivated successfully.
Oct 01 13:25:34 compute-0 podman[224880]: 2025-10-01 13:25:34.340648943 +0000 UTC m=+1.079340226 container remove 330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 13:25:34 compute-0 python3.9[224808]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22 /usr/sbin/iscsi-iname
Oct 01 13:25:34 compute-0 systemd[1]: libpod-conmon-330e51b603aaa4841f4c5c5e54f1132df57ac245b8a5bf7dd93e4bf86281d8a4.scope: Deactivated successfully.
Oct 01 13:25:34 compute-0 python3.9[224808]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 01 13:25:34 compute-0 sudo[224806]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:35 compute-0 sudo[225278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdlapatzqhrcymosogrlyktnabfbeqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325134.6887205-119-248016027975968/AnsiballZ_stat.py'
Oct 01 13:25:35 compute-0 sudo[225278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:35 compute-0 python3.9[225280]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:35 compute-0 sudo[225278]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:35 compute-0 ceph-mon[74802]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:35 compute-0 sudo[225401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcpadhunjpbwqijompmhbvfeoncyegjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325134.6887205-119-248016027975968/AnsiballZ_copy.py'
Oct 01 13:25:35 compute-0 sudo[225401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:36 compute-0 python3.9[225403]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325134.6887205-119-248016027975968/.source.iscsi _original_basename=.z437gr7x follow=False checksum=cf00cb9257c28bd43e3d04701f5a37e8933c1dfb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:36 compute-0 sudo[225401]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:36 compute-0 sudo[225553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myoxgjnyrnjelfrqqpfslfkfbecdkcvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325136.261624-134-136809116697510/AnsiballZ_file.py'
Oct 01 13:25:36 compute-0 sudo[225553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:36 compute-0 python3.9[225555]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:36 compute-0 sudo[225553]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:37 compute-0 ceph-mon[74802]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:37 compute-0 python3.9[225705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:25:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:38 compute-0 sudo[225857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isputzcgtrdccackwphdysqvlyrtejuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325137.950795-151-133362666637453/AnsiballZ_lineinfile.py'
Oct 01 13:25:38 compute-0 sudo[225857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:38 compute-0 python3.9[225859]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:38 compute-0 sudo[225857]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:39 compute-0 sudo[226009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwfukiuxwyydcohlvesknlmemtqmijvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325138.9327447-160-271066136414739/AnsiballZ_file.py'
Oct 01 13:25:39 compute-0 sudo[226009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:39 compute-0 ceph-mon[74802]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:39 compute-0 python3.9[226011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:39 compute-0 sudo[226009]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:40 compute-0 sudo[226161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajsceeezksvfkhnutrshuyyyksiamuyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325139.7319012-168-45826301271733/AnsiballZ_stat.py'
Oct 01 13:25:40 compute-0 sudo[226161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:40 compute-0 python3.9[226163]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:40 compute-0 sudo[226161]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:40 compute-0 sudo[226239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czhfuxxbegsqhaatqraumvfcoqwnmepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325139.7319012-168-45826301271733/AnsiballZ_file.py'
Oct 01 13:25:40 compute-0 sudo[226239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:40 compute-0 python3.9[226241]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:40 compute-0 sudo[226239]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:41 compute-0 sudo[226391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elgmgakbmsnnvqsyscviczaryyxtkotf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325141.0071266-168-267547007475950/AnsiballZ_stat.py'
Oct 01 13:25:41 compute-0 sudo[226391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:41 compute-0 ceph-mon[74802]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:41 compute-0 python3.9[226393]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:41 compute-0 sudo[226391]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:41 compute-0 sudo[226481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqhwolqvmujhdeqlkodmvvflmciprwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325141.0071266-168-267547007475950/AnsiballZ_file.py'
Oct 01 13:25:41 compute-0 sudo[226481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:41 compute-0 podman[226443]: 2025-10-01 13:25:41.903351161 +0000 UTC m=+0.096751544 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:25:42 compute-0 python3.9[226489]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:42 compute-0 sudo[226481]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:42 compute-0 sudo[226647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcgekxprtuxulentmeboityhofnadpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325142.3102343-191-145293015094300/AnsiballZ_file.py'
Oct 01 13:25:42 compute-0 sudo[226647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:42 compute-0 python3.9[226649]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:42 compute-0 sudo[226647]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:43 compute-0 sudo[226799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtqeesdpfqsivovygmbrehmkwrtmdgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325143.0546238-199-162049488882837/AnsiballZ_stat.py'
Oct 01 13:25:43 compute-0 sudo[226799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:43 compute-0 ceph-mon[74802]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:43 compute-0 python3.9[226801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:43 compute-0 sudo[226799]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:43 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 01 13:25:43 compute-0 sudo[226879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptnulrfbbhnijewtcjuurwhumcxwcus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325143.0546238-199-162049488882837/AnsiballZ_file.py'
Oct 01 13:25:43 compute-0 sudo[226879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:44 compute-0 python3.9[226881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:44 compute-0 sudo[226879]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:44 compute-0 sshd-session[226804]: Invalid user caja2 from 80.253.31.232 port 45214
Oct 01 13:25:44 compute-0 sshd-session[226804]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:25:44 compute-0 sshd-session[226804]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:25:44 compute-0 sudo[227031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zteapflaotnhzfnlrlrshorkmxndaiek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325144.4591672-211-176255726674415/AnsiballZ_stat.py'
Oct 01 13:25:44 compute-0 sudo[227031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:45 compute-0 python3.9[227033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:45 compute-0 sudo[227031]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:45 compute-0 sudo[227109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcpttjqbestvlfjpqyryutzzwgbvibgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325144.4591672-211-176255726674415/AnsiballZ_file.py'
Oct 01 13:25:45 compute-0 sudo[227109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:45 compute-0 ceph-mon[74802]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:45 compute-0 python3.9[227111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:45 compute-0 sudo[227109]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:46 compute-0 sudo[227272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffcvixkrcbefwuuzffxxuhpwycvplswc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325145.9125621-223-153606812788836/AnsiballZ_systemd.py'
Oct 01 13:25:46 compute-0 sudo[227272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:46 compute-0 podman[227235]: 2025-10-01 13:25:46.318647403 +0000 UTC m=+0.075952416 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:25:46 compute-0 python3.9[227284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:25:46 compute-0 systemd[1]: Reloading.
Oct 01 13:25:46 compute-0 systemd-rc-local-generator[227305]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:46 compute-0 systemd-sysv-generator[227308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:46 compute-0 sshd-session[226804]: Failed password for invalid user caja2 from 80.253.31.232 port 45214 ssh2
Oct 01 13:25:47 compute-0 sudo[227272]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:47 compute-0 ceph-mon[74802]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:47 compute-0 sudo[227471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpckusptcmwrmqefjemxzzgwggonhotu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325147.2338471-231-39859197224019/AnsiballZ_stat.py'
Oct 01 13:25:47 compute-0 sudo[227471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:25:47
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'images', 'default.rgw.log']
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:25:47 compute-0 python3.9[227473]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:47 compute-0 sudo[227471]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:25:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:25:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:48 compute-0 sudo[227549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erisablutyqixrwkkmgvxfumtgumdsvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325147.2338471-231-39859197224019/AnsiballZ_file.py'
Oct 01 13:25:48 compute-0 sudo[227549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:48 compute-0 sshd-session[226804]: Received disconnect from 80.253.31.232 port 45214:11: Bye Bye [preauth]
Oct 01 13:25:48 compute-0 sshd-session[226804]: Disconnected from invalid user caja2 80.253.31.232 port 45214 [preauth]
Oct 01 13:25:48 compute-0 python3.9[227551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:48 compute-0 sudo[227549]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:48 compute-0 sudo[227701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhuqdkjzfkcnqggcsdppwrbfnsktuufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325148.5764863-243-254577960232561/AnsiballZ_stat.py'
Oct 01 13:25:48 compute-0 sudo[227701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:49 compute-0 python3.9[227703]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:49 compute-0 sudo[227701]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:49 compute-0 sudo[227779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqeuunxjzwtgygrwugdcjnoisgtgzug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325148.5764863-243-254577960232561/AnsiballZ_file.py'
Oct 01 13:25:49 compute-0 sudo[227779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:49 compute-0 python3.9[227781]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:49 compute-0 sudo[227779]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:49 compute-0 ceph-mon[74802]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:50 compute-0 sudo[227931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhnqhknlowdlyzjmarmeixcwioyhzyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325149.9496367-255-13390768290252/AnsiballZ_systemd.py'
Oct 01 13:25:50 compute-0 sudo[227931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:50 compute-0 python3.9[227933]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:25:50 compute-0 systemd[1]: Reloading.
Oct 01 13:25:50 compute-0 systemd-sysv-generator[227961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:25:50 compute-0 systemd-rc-local-generator[227958]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:25:51 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:25:51 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:25:51 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:25:51 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:25:51 compute-0 sudo[227931]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:51 compute-0 sudo[228124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimwqoxmfnxglkkwptxxoiwswcfczrsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325151.5518668-265-19988158540594/AnsiballZ_file.py'
Oct 01 13:25:51 compute-0 sudo[228124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:51 compute-0 ceph-mon[74802]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:52 compute-0 python3.9[228126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:52 compute-0 sudo[228124]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:52 compute-0 sudo[228276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojstrerketgypvvatztgoqpwkcocwfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325152.485664-273-96854921709342/AnsiballZ_stat.py'
Oct 01 13:25:52 compute-0 sudo[228276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:53 compute-0 ceph-mon[74802]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:53 compute-0 python3.9[228278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:53 compute-0 sudo[228276]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:53 compute-0 sudo[228399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmsizsjmstdnujfpjbpznlvyoupymcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325152.485664-273-96854921709342/AnsiballZ_copy.py'
Oct 01 13:25:53 compute-0 sudo[228399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:53 compute-0 python3.9[228401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325152.485664-273-96854921709342/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:53 compute-0 sudo[228399]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:54 compute-0 sudo[228551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liozkswajirlfgdvyvfriydrnzxmifnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325154.1226292-290-19298671737102/AnsiballZ_file.py'
Oct 01 13:25:54 compute-0 sudo[228551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:54 compute-0 python3.9[228553]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:25:54 compute-0 sudo[228551]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:55 compute-0 sudo[228703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlxgcelmevlyoaycontzhlubedzducfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325154.963235-298-221307292650473/AnsiballZ_stat.py'
Oct 01 13:25:55 compute-0 sudo[228703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:55 compute-0 ceph-mon[74802]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:55 compute-0 python3.9[228705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:25:55 compute-0 sudo[228703]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:56 compute-0 sudo[228826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogjmgzuvcxggsygdgzjbwcciewvxhnle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325154.963235-298-221307292650473/AnsiballZ_copy.py'
Oct 01 13:25:56 compute-0 sudo[228826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:25:56 compute-0 python3.9[228828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325154.963235-298-221307292650473/.source.json _original_basename=.uya_gf8w follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:56 compute-0 sudo[228826]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:57 compute-0 sudo[228978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftqtemelnfsxdqgruudxasjslpqtzny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325156.7144704-313-183467153288159/AnsiballZ_file.py'
Oct 01 13:25:57 compute-0 sudo[228978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:25:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:25:57 compute-0 python3.9[228980]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:25:57 compute-0 sudo[228978]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:57 compute-0 ceph-mon[74802]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:57 compute-0 sudo[229132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxhapcdjidnaaaurdwrsselaoxmciks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325157.5256495-321-258250084905975/AnsiballZ_stat.py'
Oct 01 13:25:57 compute-0 sudo[229132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:58 compute-0 sudo[229132]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:58 compute-0 sudo[229255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vorscjeccyejfwkzarujajfptogslyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325157.5256495-321-258250084905975/AnsiballZ_copy.py'
Oct 01 13:25:58 compute-0 sudo[229255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:58 compute-0 sudo[229255]: pam_unix(sudo:session): session closed for user root
Oct 01 13:25:59 compute-0 sshd-session[229104]: Invalid user deniz from 27.254.137.144 port 33822
Oct 01 13:25:59 compute-0 sshd-session[229104]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:25:59 compute-0 sshd-session[229104]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:25:59 compute-0 ceph-mon[74802]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:25:59 compute-0 sudo[229407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzmgyeocbhnwikgyobuwidgyggmledhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325159.1907737-338-160400264273338/AnsiballZ_container_config_data.py'
Oct 01 13:25:59 compute-0 sudo[229407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:25:59 compute-0 python3.9[229409]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 01 13:26:00 compute-0 sudo[229407]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:00 compute-0 sudo[229559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssqcumzepkolcaisusfmtbkvdgjijexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325160.2488012-347-262940216587720/AnsiballZ_container_config_hash.py'
Oct 01 13:26:00 compute-0 sudo[229559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:00 compute-0 sshd-session[229104]: Failed password for invalid user deniz from 27.254.137.144 port 33822 ssh2
Oct 01 13:26:00 compute-0 python3.9[229561]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:26:01 compute-0 sudo[229559]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:01 compute-0 sshd-session[229104]: Received disconnect from 27.254.137.144 port 33822:11: Bye Bye [preauth]
Oct 01 13:26:01 compute-0 sshd-session[229104]: Disconnected from invalid user deniz 27.254.137.144 port 33822 [preauth]
Oct 01 13:26:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.318792) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161318848, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1762, "num_deletes": 250, "total_data_size": 2993291, "memory_usage": 3037544, "flush_reason": "Manual Compaction"}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161357403, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1683894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11837, "largest_seqno": 13598, "table_properties": {"data_size": 1678153, "index_size": 2880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14308, "raw_average_key_size": 20, "raw_value_size": 1665473, "raw_average_value_size": 2335, "num_data_blocks": 133, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324958, "oldest_key_time": 1759324958, "file_creation_time": 1759325161, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 38658 microseconds, and 5984 cpu microseconds.
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.357453) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1683894 bytes OK
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.357472) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369643) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369667) EVENT_LOG_v1 {"time_micros": 1759325161369660, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.369692) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2985819, prev total WAL file size 2985819, number of live WAL files 2.
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.370678) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1644KB)], [29(7836KB)]
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161370765, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9708428, "oldest_snapshot_seqno": -1}
Oct 01 13:26:01 compute-0 sshd-session[229562]: Invalid user admin from 200.7.101.139 port 58280
Oct 01 13:26:01 compute-0 sshd-session[229562]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:26:01 compute-0 sshd-session[229562]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4031 keys, 7670468 bytes, temperature: kUnknown
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161436474, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7670468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7641362, "index_size": 17924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95923, "raw_average_key_size": 23, "raw_value_size": 7566543, "raw_average_value_size": 1877, "num_data_blocks": 779, "num_entries": 4031, "num_filter_entries": 4031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325161, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.436936) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7670468 bytes
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.440850) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 116.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 4447, records dropped: 416 output_compression: NoCompression
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.440893) EVENT_LOG_v1 {"time_micros": 1759325161440874, "job": 12, "event": "compaction_finished", "compaction_time_micros": 65876, "compaction_time_cpu_micros": 19986, "output_level": 6, "num_output_files": 1, "total_output_size": 7670468, "num_input_records": 4447, "num_output_records": 4031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161441662, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325161444628, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.370599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:26:01.444778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:26:01 compute-0 ceph-mon[74802]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:01 compute-0 sudo[229713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcpdpwbmedvbbhklnjogeqmyqkqjztld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325161.304179-356-42353953265141/AnsiballZ_podman_container_info.py'
Oct 01 13:26:01 compute-0 sudo[229713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:02 compute-0 python3.9[229715]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 13:26:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:02 compute-0 sudo[229713]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:03 compute-0 sshd-session[229562]: Failed password for invalid user admin from 200.7.101.139 port 58280 ssh2
Oct 01 13:26:03 compute-0 sudo[229892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksklnmogjoeugqyzdcgfnxfsbbbgnypq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759325163.013416-369-232307889567150/AnsiballZ_edpm_container_manage.py'
Oct 01 13:26:03 compute-0 sudo[229892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:03 compute-0 ceph-mon[74802]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:03 compute-0 sshd-session[229562]: Received disconnect from 200.7.101.139 port 58280:11: Bye Bye [preauth]
Oct 01 13:26:03 compute-0 sshd-session[229562]: Disconnected from invalid user admin 200.7.101.139 port 58280 [preauth]
Oct 01 13:26:03 compute-0 python3[229894]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:26:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:04 compute-0 podman[229932]: 2025-10-01 13:26:04.145610503 +0000 UTC m=+0.024142693 image pull 4c2cf735485aec82560a51e8042a9e65bbe194a07c6812512d6a5e2ed955852b quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct 01 13:26:04 compute-0 podman[229932]: 2025-10-01 13:26:04.297534644 +0000 UTC m=+0.176066854 container create c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923)
Oct 01 13:26:04 compute-0 python3[229894]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22
Oct 01 13:26:04 compute-0 sudo[229892]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:05 compute-0 sudo[230121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imxxoviubrfxipqpvytxrbpxojrgtuam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325164.7216733-377-110177037709528/AnsiballZ_stat.py'
Oct 01 13:26:05 compute-0 sudo[230121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:05 compute-0 python3.9[230123]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:05 compute-0 sudo[230121]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:05 compute-0 ceph-mon[74802]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:06 compute-0 sudo[230275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvondfckjfraazrjrqtwapibvauxbng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325165.6933217-386-240596712322907/AnsiballZ_file.py'
Oct 01 13:26:06 compute-0 sudo[230275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:06 compute-0 python3.9[230277]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:06 compute-0 sudo[230275]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:06 compute-0 sudo[230351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfczrcrgwmubrcjetjyslivjekduosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325165.6933217-386-240596712322907/AnsiballZ_stat.py'
Oct 01 13:26:06 compute-0 sudo[230351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:06 compute-0 python3.9[230353]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:06 compute-0 sudo[230351]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:07 compute-0 sudo[230502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjfutnhzcovcsojvrodiwiyrnyjqwiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325166.78334-386-229266066832100/AnsiballZ_copy.py'
Oct 01 13:26:07 compute-0 sudo[230502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:07 compute-0 python3.9[230504]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325166.78334-386-229266066832100/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:07 compute-0 sudo[230502]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:07 compute-0 ceph-mon[74802]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:07 compute-0 sudo[230578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnlrgzqxccfkxhjpjwquwzjflvebdqzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325166.78334-386-229266066832100/AnsiballZ_systemd.py'
Oct 01 13:26:07 compute-0 sudo[230578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:08 compute-0 python3.9[230580]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:26:08 compute-0 systemd[1]: Reloading.
Oct 01 13:26:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:08 compute-0 systemd-rc-local-generator[230601]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:26:08 compute-0 systemd-sysv-generator[230606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:26:08 compute-0 sudo[230578]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:08 compute-0 sudo[230689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrjgaxxxnnxuoceqxxzyjpbuuonohmke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325166.78334-386-229266066832100/AnsiballZ_systemd.py'
Oct 01 13:26:08 compute-0 sudo[230689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:09 compute-0 python3.9[230691]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:26:09 compute-0 systemd[1]: Reloading.
Oct 01 13:26:09 compute-0 systemd-rc-local-generator[230744]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:26:09 compute-0 systemd-sysv-generator[230747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:26:09 compute-0 sudo[230695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:09 compute-0 sudo[230695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:09 compute-0 sudo[230695]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:09 compute-0 systemd[1]: Starting iscsid container...
Oct 01 13:26:09 compute-0 sudo[230757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:26:09 compute-0 sudo[230757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:09 compute-0 sudo[230757]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5ef4560b82efa07bc6c8fef785be160e88f15c11a0780685435a1bc6e40f6db/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:09 compute-0 sudo[230792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:09 compute-0 sudo[230792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d.
Oct 01 13:26:09 compute-0 sudo[230792]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:09 compute-0 podman[230756]: 2025-10-01 13:26:09.689265237 +0000 UTC m=+0.163721179 container init c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:26:09 compute-0 iscsid[230799]: + sudo -E kolla_set_configs
Oct 01 13:26:09 compute-0 podman[230756]: 2025-10-01 13:26:09.718185807 +0000 UTC m=+0.192641749 container start c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:26:09 compute-0 podman[230756]: iscsid
Oct 01 13:26:09 compute-0 systemd[1]: Started iscsid container.
Oct 01 13:26:09 compute-0 sudo[230825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:26:09 compute-0 sudo[230825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:09 compute-0 sudo[230689]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:09 compute-0 sudo[230839]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 13:26:09 compute-0 ceph-mon[74802]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:09 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 01 13:26:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 01 13:26:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 01 13:26:09 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 01 13:26:09 compute-0 podman[230834]: 2025-10-01 13:26:09.852889301 +0000 UTC m=+0.119247894 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 01 13:26:09 compute-0 systemd[1]: c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d-64b353af6c8de46d.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 13:26:09 compute-0 systemd[1]: c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d-64b353af6c8de46d.service: Failed with result 'exit-code'.
Oct 01 13:26:09 compute-0 systemd[230898]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 01 13:26:09 compute-0 systemd[230898]: Queued start job for default target Main User Target.
Oct 01 13:26:10 compute-0 systemd[230898]: Created slice User Application Slice.
Oct 01 13:26:10 compute-0 systemd[230898]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 01 13:26:10 compute-0 systemd[230898]: Started Daily Cleanup of User's Temporary Directories.
Oct 01 13:26:10 compute-0 systemd[230898]: Reached target Paths.
Oct 01 13:26:10 compute-0 systemd[230898]: Reached target Timers.
Oct 01 13:26:10 compute-0 systemd[230898]: Starting D-Bus User Message Bus Socket...
Oct 01 13:26:10 compute-0 systemd[230898]: Starting Create User's Volatile Files and Directories...
Oct 01 13:26:10 compute-0 systemd[230898]: Finished Create User's Volatile Files and Directories.
Oct 01 13:26:10 compute-0 systemd[230898]: Listening on D-Bus User Message Bus Socket.
Oct 01 13:26:10 compute-0 systemd[230898]: Reached target Sockets.
Oct 01 13:26:10 compute-0 systemd[230898]: Reached target Basic System.
Oct 01 13:26:10 compute-0 systemd[230898]: Reached target Main User Target.
Oct 01 13:26:10 compute-0 systemd[230898]: Startup finished in 167ms.
Oct 01 13:26:10 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 01 13:26:10 compute-0 systemd[1]: Started Session c3 of User root.
Oct 01 13:26:10 compute-0 sudo[230839]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:26:10 compute-0 iscsid[230799]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:26:10 compute-0 iscsid[230799]: INFO:__main__:Validating config file
Oct 01 13:26:10 compute-0 iscsid[230799]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:26:10 compute-0 iscsid[230799]: INFO:__main__:Writing out command to execute
Oct 01 13:26:10 compute-0 sudo[230839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 01 13:26:10 compute-0 iscsid[230799]: ++ cat /run_command
Oct 01 13:26:10 compute-0 iscsid[230799]: + CMD='/usr/sbin/iscsid -f'
Oct 01 13:26:10 compute-0 iscsid[230799]: + ARGS=
Oct 01 13:26:10 compute-0 iscsid[230799]: + sudo kolla_copy_cacerts
Oct 01 13:26:10 compute-0 sudo[231010]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 13:26:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:10 compute-0 systemd[1]: Started Session c4 of User root.
Oct 01 13:26:10 compute-0 sudo[231010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:26:10 compute-0 sudo[231010]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 01 13:26:10 compute-0 iscsid[230799]: + [[ ! -n '' ]]
Oct 01 13:26:10 compute-0 iscsid[230799]: + . kolla_extend_start
Oct 01 13:26:10 compute-0 iscsid[230799]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 01 13:26:10 compute-0 iscsid[230799]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 01 13:26:10 compute-0 iscsid[230799]: Running command: '/usr/sbin/iscsid -f'
Oct 01 13:26:10 compute-0 iscsid[230799]: + umask 0022
Oct 01 13:26:10 compute-0 iscsid[230799]: + exec /usr/sbin/iscsid -f
Oct 01 13:26:10 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 01 13:26:10 compute-0 sudo[230825]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9b841195-ffb1-47f6-a587-2d92df367a3c does not exist
Oct 01 13:26:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 92ff68c5-e5da-4017-b376-fe70843f9204 does not exist
Oct 01 13:26:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8724994f-f4f1-4803-9e1b-44e8bd6fb9b1 does not exist
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:26:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:26:10 compute-0 sudo[231084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:10 compute-0 sudo[231084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:10 compute-0 sudo[231084]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 python3.9[231083]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:10 compute-0 sudo[231109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:26:10 compute-0 sudo[231109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:10 compute-0 sudo[231109]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 sudo[231134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:10 compute-0 sudo[231134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:10 compute-0 sudo[231134]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:10 compute-0 sudo[231183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:26:10 compute-0 sudo[231183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:26:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:26:11 compute-0 sudo[231377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlwikbrpsuuygxbvqeyvvwdzakvezgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325170.752619-423-90543339976125/AnsiballZ_file.py'
Oct 01 13:26:11 compute-0 sudo[231377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.181122006 +0000 UTC m=+0.071832947 container create 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:26:11 compute-0 systemd[1]: Started libpod-conmon-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope.
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.151402851 +0000 UTC m=+0.042113802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:11 compute-0 python3.9[231385]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:11 compute-0 sudo[231377]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.433626888 +0000 UTC m=+0.324337839 container init 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.44459613 +0000 UTC m=+0.335307061 container start 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:26:11 compute-0 inspiring_boyd[231394]: 167 167
Oct 01 13:26:11 compute-0 systemd[1]: libpod-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope: Deactivated successfully.
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.52457478 +0000 UTC m=+0.415285741 container attach 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:26:11 compute-0 podman[231372]: 2025-10-01 13:26:11.525272902 +0000 UTC m=+0.415983833 container died 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct 01 13:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a247247b4c1421999aaefe6332a1e47e5e647520daed3d49e37d6dbc55c705e7-merged.mount: Deactivated successfully.
Oct 01 13:26:11 compute-0 ceph-mon[74802]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:12 compute-0 sudo[231572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hllkqnqggnboidszyiykramyhveqfjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325171.7379513-434-258331968262290/AnsiballZ_service_facts.py'
Oct 01 13:26:12 compute-0 sudo[231572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.292 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:26:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:26:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:26:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:26:12 compute-0 python3.9[231574]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:26:12 compute-0 podman[231372]: 2025-10-01 13:26:12.487383807 +0000 UTC m=+1.378094758 container remove 7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:26:12 compute-0 network[231592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:26:12 compute-0 network[231594]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:26:12 compute-0 network[231595]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:26:12 compute-0 systemd[1]: libpod-conmon-7192f6c17b5d667b976caa6b88e4029ca4c891f05750f97cc268752ddc86aacb.scope: Deactivated successfully.
Oct 01 13:26:12 compute-0 podman[231510]: 2025-10-01 13:26:12.619981356 +0000 UTC m=+0.652460886 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:26:12 compute-0 podman[231619]: 2025-10-01 13:26:12.683164013 +0000 UTC m=+0.031650756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:12 compute-0 podman[231619]: 2025-10-01 13:26:12.82888852 +0000 UTC m=+0.177375283 container create 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:26:13 compute-0 ceph-mon[74802]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:13 compute-0 systemd[1]: Started libpod-conmon-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope.
Oct 01 13:26:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:13 compute-0 podman[231619]: 2025-10-01 13:26:13.527558154 +0000 UTC m=+0.876044917 container init 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:26:13 compute-0 podman[231619]: 2025-10-01 13:26:13.539484394 +0000 UTC m=+0.887971127 container start 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:26:13 compute-0 podman[231619]: 2025-10-01 13:26:13.564564676 +0000 UTC m=+0.913051449 container attach 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:26:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:14 compute-0 sharp_kalam[231637]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:26:14 compute-0 sharp_kalam[231637]: --> relative data size: 1.0
Oct 01 13:26:14 compute-0 sharp_kalam[231637]: --> All data devices are unavailable
Oct 01 13:26:14 compute-0 podman[231619]: 2025-10-01 13:26:14.789481974 +0000 UTC m=+2.137968707 container died 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:26:14 compute-0 systemd[1]: libpod-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Deactivated successfully.
Oct 01 13:26:14 compute-0 systemd[1]: libpod-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Consumed 1.187s CPU time.
Oct 01 13:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef4461db196969a292a4cb0a79b07185073fcb904e2c5327347bcf42cafe612-merged.mount: Deactivated successfully.
Oct 01 13:26:15 compute-0 podman[231619]: 2025-10-01 13:26:15.067863842 +0000 UTC m=+2.416350575 container remove 7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kalam, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:26:15 compute-0 sudo[231183]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:15 compute-0 sshd-session[231693]: Invalid user ismail from 156.236.31.46 port 45040
Oct 01 13:26:15 compute-0 systemd[1]: libpod-conmon-7d161ceb457d7b2a2965cc30e968027a26993b582c8aaac55a730042f93f771e.scope: Deactivated successfully.
Oct 01 13:26:15 compute-0 sshd-session[231693]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:26:15 compute-0 sshd-session[231693]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:26:15 compute-0 sudo[231738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:15 compute-0 sudo[231738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:15 compute-0 sudo[231738]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:15 compute-0 sudo[231766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:26:15 compute-0 sudo[231766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:15 compute-0 sudo[231766]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:15 compute-0 ceph-mon[74802]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:15 compute-0 sudo[231792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:15 compute-0 sudo[231792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:15 compute-0 sudo[231792]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:15 compute-0 sudo[231817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:26:15 compute-0 sudo[231817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:15 compute-0 podman[231882]: 2025-10-01 13:26:15.842847891 +0000 UTC m=+0.097372813 container create 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:26:15 compute-0 podman[231882]: 2025-10-01 13:26:15.767558177 +0000 UTC m=+0.022083109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:16 compute-0 systemd[1]: Started libpod-conmon-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope.
Oct 01 13:26:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:16 compute-0 podman[231882]: 2025-10-01 13:26:16.129207586 +0000 UTC m=+0.383732578 container init 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:26:16 compute-0 podman[231882]: 2025-10-01 13:26:16.140315582 +0000 UTC m=+0.394840524 container start 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:26:16 compute-0 relaxed_saha[231898]: 167 167
Oct 01 13:26:16 compute-0 systemd[1]: libpod-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope: Deactivated successfully.
Oct 01 13:26:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:16 compute-0 podman[231882]: 2025-10-01 13:26:16.211664474 +0000 UTC m=+0.466189416 container attach 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:26:16 compute-0 podman[231882]: 2025-10-01 13:26:16.212158439 +0000 UTC m=+0.466683351 container died 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:26:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ecbdf2071b0a683fa0c362aa8cdedc084cd7d5c0556e6b7b5b420486708ed14-merged.mount: Deactivated successfully.
Oct 01 13:26:16 compute-0 podman[231882]: 2025-10-01 13:26:16.681386499 +0000 UTC m=+0.935911421 container remove 52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_saha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:26:16 compute-0 systemd[1]: libpod-conmon-52536ebcdfdf4bb05eadce262ae85390ae9dcbb463d4b552c9e8f24c7f6a7e9e.scope: Deactivated successfully.
Oct 01 13:26:16 compute-0 podman[231920]: 2025-10-01 13:26:16.786103309 +0000 UTC m=+0.418356687 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:26:16 compute-0 podman[231966]: 2025-10-01 13:26:16.902168532 +0000 UTC m=+0.039874201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:17 compute-0 podman[231966]: 2025-10-01 13:26:17.022237631 +0000 UTC m=+0.159943240 container create 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:26:17 compute-0 systemd[1]: Started libpod-conmon-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope.
Oct 01 13:26:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:17 compute-0 podman[231966]: 2025-10-01 13:26:17.192801562 +0000 UTC m=+0.330507221 container init 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:26:17 compute-0 podman[231966]: 2025-10-01 13:26:17.203871297 +0000 UTC m=+0.341576876 container start 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:26:17 compute-0 podman[231966]: 2025-10-01 13:26:17.223084605 +0000 UTC m=+0.360790204 container attach 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:26:17 compute-0 ceph-mon[74802]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:17 compute-0 sudo[231572]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:17 compute-0 sshd-session[231693]: Failed password for invalid user ismail from 156.236.31.46 port 45040 ssh2
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]: {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     "0": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "devices": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "/dev/loop3"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             ],
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_name": "ceph_lv0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_size": "21470642176",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "name": "ceph_lv0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "tags": {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_name": "ceph",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.crush_device_class": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.encrypted": "0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_id": "0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.vdo": "0"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             },
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "vg_name": "ceph_vg0"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         }
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     ],
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     "1": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "devices": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "/dev/loop4"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             ],
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_name": "ceph_lv1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_size": "21470642176",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "name": "ceph_lv1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "tags": {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_name": "ceph",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.crush_device_class": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.encrypted": "0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_id": "1",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.vdo": "0"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             },
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "vg_name": "ceph_vg1"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         }
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     ],
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     "2": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "devices": [
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "/dev/loop5"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             ],
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_name": "ceph_lv2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_size": "21470642176",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "name": "ceph_lv2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "tags": {
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.cluster_name": "ceph",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.crush_device_class": "",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.encrypted": "0",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osd_id": "2",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:                 "ceph.vdo": "0"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             },
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "type": "block",
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:             "vg_name": "ceph_vg2"
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:         }
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]:     ]
Oct 01 13:26:18 compute-0 vigorous_mcclintock[231993]: }
Oct 01 13:26:18 compute-0 systemd[1]: libpod-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope: Deactivated successfully.
Oct 01 13:26:18 compute-0 podman[231966]: 2025-10-01 13:26:18.058609749 +0000 UTC m=+1.196315348 container died 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2e87fb828dfeb4f3a15924e22ce5cbf38564ff393199633400d8b1e7313391-merged.mount: Deactivated successfully.
Oct 01 13:26:18 compute-0 podman[231966]: 2025-10-01 13:26:18.137534206 +0000 UTC m=+1.275239785 container remove 693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 13:26:18 compute-0 systemd[1]: libpod-conmon-693d0a89da7dee0c729fc58ad0cf7ef5c5f156c4271dd9f99f46c5ee17849b1f.scope: Deactivated successfully.
Oct 01 13:26:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:18 compute-0 sudo[231817]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:18 compute-0 sudo[232122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:18 compute-0 sudo[232122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:18 compute-0 sudo[232122]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:18 compute-0 sudo[232174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:26:18 compute-0 sudo[232174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:18 compute-0 sudo[232174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:18 compute-0 sudo[232238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpppvjzcmibtxpbansnmnsosdpydfyze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325178.078469-444-37320477985945/AnsiballZ_file.py'
Oct 01 13:26:18 compute-0 sudo[232238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:18 compute-0 sudo[232236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:18 compute-0 sudo[232236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:18 compute-0 sudo[232236]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:18 compute-0 sudo[232264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:26:18 compute-0 sudo[232264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:18 compute-0 python3.9[232256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 13:26:18 compute-0 sudo[232238]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.07871634 +0000 UTC m=+0.126572972 container create c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:18.992677282 +0000 UTC m=+0.040534004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:19 compute-0 systemd[1]: Started libpod-conmon-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope.
Oct 01 13:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.266286281 +0000 UTC m=+0.314142933 container init c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.276105756 +0000 UTC m=+0.323962378 container start c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:26:19 compute-0 crazy_haslett[232422]: 167 167
Oct 01 13:26:19 compute-0 systemd[1]: libpod-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope: Deactivated successfully.
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.383263093 +0000 UTC m=+0.431119765 container attach c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.38380888 +0000 UTC m=+0.431665552 container died c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:26:19 compute-0 sudo[232511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woszqmxowilmgzoizqeifflstvctwtkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325178.914507-452-242490237880109/AnsiballZ_modprobe.py'
Oct 01 13:26:19 compute-0 sudo[232511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f0302a3ea695d8c915b6ccd923f165dcb8f1fb3735308c6874a7ea158ffbfea-merged.mount: Deactivated successfully.
Oct 01 13:26:19 compute-0 sshd-session[231693]: Received disconnect from 156.236.31.46 port 45040:11: Bye Bye [preauth]
Oct 01 13:26:19 compute-0 sshd-session[231693]: Disconnected from invalid user ismail 156.236.31.46 port 45040 [preauth]
Oct 01 13:26:19 compute-0 podman[232376]: 2025-10-01 13:26:19.562357969 +0000 UTC m=+0.610214631 container remove c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:26:19 compute-0 systemd[1]: libpod-conmon-c2319e57467130c62d0358eb20ae3406effb9123d8b607d3a01df73e84181e9f.scope: Deactivated successfully.
Oct 01 13:26:19 compute-0 python3.9[232513]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 01 13:26:19 compute-0 ceph-mon[74802]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:19 compute-0 podman[232522]: 2025-10-01 13:26:19.801484574 +0000 UTC m=+0.070557978 container create 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:26:19 compute-0 sudo[232511]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:19 compute-0 podman[232522]: 2025-10-01 13:26:19.77150351 +0000 UTC m=+0.040576954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:26:19 compute-0 systemd[1]: Started libpod-conmon-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope.
Oct 01 13:26:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:26:19 compute-0 podman[232522]: 2025-10-01 13:26:19.941183594 +0000 UTC m=+0.210256968 container init 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:26:19 compute-0 podman[232522]: 2025-10-01 13:26:19.955616953 +0000 UTC m=+0.224690317 container start 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:26:19 compute-0 podman[232522]: 2025-10-01 13:26:19.95938592 +0000 UTC m=+0.228459284 container attach 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:26:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:20 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 01 13:26:20 compute-0 systemd[230898]: Activating special unit Exit the Session...
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped target Main User Target.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped target Basic System.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped target Paths.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped target Sockets.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped target Timers.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 01 13:26:20 compute-0 systemd[230898]: Closed D-Bus User Message Bus Socket.
Oct 01 13:26:20 compute-0 systemd[230898]: Stopped Create User's Volatile Files and Directories.
Oct 01 13:26:20 compute-0 systemd[230898]: Removed slice User Application Slice.
Oct 01 13:26:20 compute-0 systemd[230898]: Reached target Shutdown.
Oct 01 13:26:20 compute-0 systemd[230898]: Finished Exit the Session.
Oct 01 13:26:20 compute-0 systemd[230898]: Reached target Exit the Session.
Oct 01 13:26:20 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 01 13:26:20 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 01 13:26:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 01 13:26:20 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 01 13:26:20 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 01 13:26:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 01 13:26:20 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 01 13:26:20 compute-0 sudo[232697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdsuowxfbgwxfzmgwqcwxurgoqmmlmkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325180.0600505-460-46736324303189/AnsiballZ_stat.py'
Oct 01 13:26:20 compute-0 sudo[232697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:20 compute-0 python3.9[232699]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:20 compute-0 sudo[232697]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:20 compute-0 fervent_almeida[232548]: {
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_id": 0,
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "type": "bluestore"
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     },
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_id": 2,
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "type": "bluestore"
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     },
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_id": 1,
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:         "type": "bluestore"
Oct 01 13:26:20 compute-0 fervent_almeida[232548]:     }
Oct 01 13:26:20 compute-0 fervent_almeida[232548]: }
Oct 01 13:26:21 compute-0 systemd[1]: libpod-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Deactivated successfully.
Oct 01 13:26:21 compute-0 systemd[1]: libpod-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Consumed 1.075s CPU time.
Oct 01 13:26:21 compute-0 podman[232522]: 2025-10-01 13:26:21.033819834 +0000 UTC m=+1.302893238 container died 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:26:21 compute-0 sudo[232848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppcjacdjslephvsdebwoxocpwmwzrycv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325180.0600505-460-46736324303189/AnsiballZ_copy.py'
Oct 01 13:26:21 compute-0 sudo[232848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf5241d048e8542226a80fcd6fbfbdc77ad29b0143d89fc685caddfa9b785280-merged.mount: Deactivated successfully.
Oct 01 13:26:21 compute-0 podman[232522]: 2025-10-01 13:26:21.124566319 +0000 UTC m=+1.393639693 container remove 2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:26:21 compute-0 systemd[1]: libpod-conmon-2812f2a67a5c8bc9662635300febd5acda7b014bbfe2bbdfc4ecbdc65f3f2966.scope: Deactivated successfully.
Oct 01 13:26:21 compute-0 sudo[232264]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:26:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:26:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b1771bd7-3b7d-4976-a959-9a04e3a68dcf does not exist
Oct 01 13:26:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e7e689cf-262c-45fc-9b4d-4df34bf075db does not exist
Oct 01 13:26:21 compute-0 python3.9[232851]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325180.0600505-460-46736324303189/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:21 compute-0 sudo[232848]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:21 compute-0 sudo[232864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:26:21 compute-0 sudo[232864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:21 compute-0 sudo[232864]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:21 compute-0 sudo[232906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:26:21 compute-0 sudo[232906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:26:21 compute-0 sudo[232906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:21 compute-0 ceph-mon[74802]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:26:21 compute-0 sudo[233063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbxiowcogdeqjnweixhkerjujurquob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325181.5388746-476-33926781096024/AnsiballZ_lineinfile.py'
Oct 01 13:26:21 compute-0 sudo[233063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:22 compute-0 python3.9[233065]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:22 compute-0 sudo[233063]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:22 compute-0 sudo[233215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-empstysfadfzykbetpvjxhxrmuyuccfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325182.2772548-484-48738084411352/AnsiballZ_systemd.py'
Oct 01 13:26:22 compute-0 sudo[233215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:22 compute-0 python3.9[233217]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:26:23 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 01 13:26:23 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 01 13:26:23 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 01 13:26:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 13:26:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 13:26:23 compute-0 sudo[233215]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:23 compute-0 sudo[233371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrhmptxwdfupirektgejwdkswebstkey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325183.2507036-492-195379923040767/AnsiballZ_file.py'
Oct 01 13:26:23 compute-0 sudo[233371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:23 compute-0 python3.9[233373]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:23 compute-0 sudo[233371]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:23 compute-0 ceph-mon[74802]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:24 compute-0 sudo[233523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bngmmjgcbegywkqqrpzwlhkzexbcwotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325184.1002629-501-229872871730066/AnsiballZ_stat.py'
Oct 01 13:26:24 compute-0 sudo[233523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:24 compute-0 python3.9[233525]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:24 compute-0 sudo[233523]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:25 compute-0 ceph-mon[74802]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:25 compute-0 sudo[233675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbhporpfummustyhwvmitoroijwmpkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325184.9328601-510-269439678099570/AnsiballZ_stat.py'
Oct 01 13:26:25 compute-0 sudo[233675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:25 compute-0 python3.9[233677]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:25 compute-0 sudo[233675]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:25 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 01 13:26:26 compute-0 sudo[233828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maduxiitvbtdlaploqlwwxipwdrytdei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325185.686633-518-161029049689613/AnsiballZ_stat.py'
Oct 01 13:26:26 compute-0 sudo[233828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:26 compute-0 python3.9[233830]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:26 compute-0 sudo[233828]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:26 compute-0 sudo[233951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvizupkimwetksrpeyetqatlmcfoohwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325185.686633-518-161029049689613/AnsiballZ_copy.py'
Oct 01 13:26:26 compute-0 sudo[233951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:26 compute-0 python3.9[233953]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325185.686633-518-161029049689613/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:26 compute-0 sudo[233951]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:27 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 01 13:26:27 compute-0 ceph-mon[74802]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:27 compute-0 sudo[234104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euxepczwizhkouxevfoplxzczkcilrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325187.0334408-533-113765327902334/AnsiballZ_command.py'
Oct 01 13:26:27 compute-0 sudo[234104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:27 compute-0 python3.9[234106]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:26:27 compute-0 sudo[234104]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:28 compute-0 sudo[234257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtksqqlmjkkxkghzlywfvejapxzwfayw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325187.872297-541-146291669233424/AnsiballZ_lineinfile.py'
Oct 01 13:26:28 compute-0 sudo[234257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:28 compute-0 python3.9[234259]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:28 compute-0 sudo[234257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:29 compute-0 sudo[234409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onraojintmvdizvrjbalkrebehjbkutg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325188.6260288-549-113639526025771/AnsiballZ_replace.py'
Oct 01 13:26:29 compute-0 sudo[234409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:29 compute-0 python3.9[234411]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:29 compute-0 sudo[234409]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:29 compute-0 ceph-mon[74802]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:29 compute-0 sudo[234561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lievmqgxvtajrzeymxggcfppmthejrfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325189.5051305-557-72780239046656/AnsiballZ_replace.py'
Oct 01 13:26:29 compute-0 sudo[234561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:30 compute-0 python3.9[234563]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:30 compute-0 sudo[234561]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:30 compute-0 sudo[234713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szlfesokubxzuwuxvctwwqmcxhzbzgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325190.382084-566-270083201606502/AnsiballZ_lineinfile.py'
Oct 01 13:26:30 compute-0 sudo[234713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:31 compute-0 python3.9[234715]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:31 compute-0 sudo[234713]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:31 compute-0 sudo[234865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrqakfmhxyplatffrozucerpruxrqrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325191.2225409-566-175594188838302/AnsiballZ_lineinfile.py'
Oct 01 13:26:31 compute-0 sudo[234865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:31 compute-0 python3.9[234867]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:31 compute-0 ceph-mon[74802]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:31 compute-0 sudo[234865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:32 compute-0 sudo[235017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifdmvjrjeulzplpekoxyaogcwkkckme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325191.9238684-566-25192744576725/AnsiballZ_lineinfile.py'
Oct 01 13:26:32 compute-0 sudo[235017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:32 compute-0 python3.9[235019]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:32 compute-0 sudo[235017]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:32 compute-0 sudo[235169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qesfhcwdjlpatasfsrngityklfumkxul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325192.6486804-566-270923775363698/AnsiballZ_lineinfile.py'
Oct 01 13:26:32 compute-0 sudo[235169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:33 compute-0 ceph-mon[74802]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:33 compute-0 python3.9[235171]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:33 compute-0 sudo[235169]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:33 compute-0 sudo[235321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwmpvacfhnkirbjmqcmvalsfjncttqcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325193.5038326-595-142487148644581/AnsiballZ_stat.py'
Oct 01 13:26:33 compute-0 sudo[235321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:34 compute-0 python3.9[235323]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:26:34 compute-0 sudo[235321]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:34 compute-0 sudo[235475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llebrfbxsziegtsgnvgczvktyxobrfwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325194.361932-603-215767152823487/AnsiballZ_file.py'
Oct 01 13:26:34 compute-0 sudo[235475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:34 compute-0 python3.9[235477]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:34 compute-0 sudo[235475]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:35 compute-0 sudo[235627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzccpfhtkakbgddgraebmgmmaidzxiqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325195.3187952-612-260130372791900/AnsiballZ_file.py'
Oct 01 13:26:35 compute-0 sudo[235627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:35 compute-0 ceph-mon[74802]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:35 compute-0 python3.9[235629]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:36 compute-0 sudo[235627]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:36 compute-0 sudo[235779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugfyctuwpkdmbpdudbcmmaxdbptcdgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325196.1757753-620-194564545610548/AnsiballZ_stat.py'
Oct 01 13:26:36 compute-0 sudo[235779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:36 compute-0 python3.9[235781]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:36 compute-0 sudo[235779]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:37 compute-0 ceph-mon[74802]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:37 compute-0 sudo[235857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leopomjzoawjtmiqlfjerloympgwlilr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325196.1757753-620-194564545610548/AnsiballZ_file.py'
Oct 01 13:26:37 compute-0 sudo[235857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:37 compute-0 python3.9[235859]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:37 compute-0 sudo[235857]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:37 compute-0 sudo[236009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwjdqoigyjhzaclurgebdtkxepbcjog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325197.540773-620-261236003903300/AnsiballZ_stat.py'
Oct 01 13:26:37 compute-0 sudo[236009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:38 compute-0 python3.9[236011]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:38 compute-0 sudo[236009]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:38 compute-0 sudo[236087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czireoscsrixsfjydzlkpzobkvvghhpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325197.540773-620-261236003903300/AnsiballZ_file.py'
Oct 01 13:26:38 compute-0 sudo[236087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:38 compute-0 python3.9[236089]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:38 compute-0 sudo[236087]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:39 compute-0 sudo[236239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aglzapayggviwammfzlxncfsklzhhbsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325198.9301865-643-185022089135862/AnsiballZ_file.py'
Oct 01 13:26:39 compute-0 sudo[236239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:39 compute-0 python3.9[236241]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:39 compute-0 sudo[236239]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:39 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 01 13:26:39 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 01 13:26:39 compute-0 ceph-mon[74802]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:40 compute-0 sudo[236409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpzgowbsveuveuujdatkzsgzbonmxxii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325199.7565281-651-177307594218638/AnsiballZ_stat.py'
Oct 01 13:26:40 compute-0 sudo[236409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:40 compute-0 podman[236367]: 2025-10-01 13:26:40.140142857 +0000 UTC m=+0.112930818 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:26:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:40 compute-0 python3.9[236414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:40 compute-0 sudo[236409]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:40 compute-0 sudo[236491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggrkdobuhmtstrgtefjytctbzjnkyeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325199.7565281-651-177307594218638/AnsiballZ_file.py'
Oct 01 13:26:40 compute-0 sudo[236491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:40 compute-0 python3.9[236493]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:40 compute-0 sudo[236491]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:41 compute-0 ceph-mon[74802]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:41 compute-0 sudo[236643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzvgnwrlvhffyyxbkcbwvsiaowifmnxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325201.0730321-663-120604550536546/AnsiballZ_stat.py'
Oct 01 13:26:41 compute-0 sudo[236643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:41 compute-0 python3.9[236645]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:41 compute-0 sudo[236643]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:42 compute-0 sudo[236721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egiuidzpxfzueqqrhlcajrbpitymwhju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325201.0730321-663-120604550536546/AnsiballZ_file.py'
Oct 01 13:26:42 compute-0 sudo[236721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:42 compute-0 python3.9[236723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:42 compute-0 sudo[236721]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:42 compute-0 sudo[236873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdazwckxultlfxsilomdlaqebdmboask ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325202.484754-675-253218461892796/AnsiballZ_systemd.py'
Oct 01 13:26:42 compute-0 sudo[236873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:43 compute-0 python3.9[236875]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:26:43 compute-0 systemd[1]: Reloading.
Oct 01 13:26:43 compute-0 systemd-sysv-generator[236903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:26:43 compute-0 systemd-rc-local-generator[236896]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:26:44 compute-0 sudo[236873]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:44 compute-0 podman[236912]: 2025-10-01 13:26:44.075403862 +0000 UTC m=+0.145113609 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct 01 13:26:44 compute-0 ceph-mon[74802]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:44 compute-0 sudo[237088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvajdngykyfntodpxfqcijefvfusxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325204.2481918-683-269342735395808/AnsiballZ_stat.py'
Oct 01 13:26:44 compute-0 sudo[237088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:44 compute-0 python3.9[237090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:44 compute-0 sudo[237088]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:45 compute-0 sudo[237166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjpfbipnfzgwjthdydpggqmwsmawbbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325204.2481918-683-269342735395808/AnsiballZ_file.py'
Oct 01 13:26:45 compute-0 sudo[237166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:45 compute-0 ceph-mon[74802]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:45 compute-0 python3.9[237168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:45 compute-0 sudo[237166]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:46 compute-0 sudo[237318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbfsrccirtsspmulhbgzcqybhfgzvac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325205.7755704-695-146988458429822/AnsiballZ_stat.py'
Oct 01 13:26:46 compute-0 sudo[237318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:46 compute-0 python3.9[237320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:46 compute-0 sudo[237318]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:46 compute-0 sudo[237396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjzzlpwfgnztsybjittkpuzodqlocve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325205.7755704-695-146988458429822/AnsiballZ_file.py'
Oct 01 13:26:46 compute-0 sudo[237396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:46 compute-0 podman[237398]: 2025-10-01 13:26:46.943531952 +0000 UTC m=+0.096214778 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 01 13:26:47 compute-0 python3.9[237399]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:47 compute-0 sudo[237396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:47 compute-0 ceph-mon[74802]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:47 compute-0 sudo[237570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqpobmisgwcsudfvewwyhrwbwxgxqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325207.3063302-707-84660385648451/AnsiballZ_systemd.py'
Oct 01 13:26:47 compute-0 sudo[237570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:26:47
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'images', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:26:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:26:48 compute-0 python3.9[237572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:26:48 compute-0 sshd-session[237483]: Invalid user pavel from 80.253.31.232 port 43246
Oct 01 13:26:48 compute-0 systemd[1]: Reloading.
Oct 01 13:26:48 compute-0 systemd-rc-local-generator[237599]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:26:48 compute-0 systemd-sysv-generator[237602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:26:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:48 compute-0 sshd-session[237483]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:26:48 compute-0 sshd-session[237483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:26:49 compute-0 ceph-mon[74802]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:49 compute-0 systemd[1]: Starting Create netns directory...
Oct 01 13:26:49 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 01 13:26:49 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 01 13:26:49 compute-0 systemd[1]: Finished Create netns directory.
Oct 01 13:26:49 compute-0 sudo[237570]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:50 compute-0 sshd-session[237483]: Failed password for invalid user pavel from 80.253.31.232 port 43246 ssh2
Oct 01 13:26:50 compute-0 sudo[237762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubooxwfmuhyibsfpllewkvpchbhfyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325210.066297-717-226338692764167/AnsiballZ_file.py'
Oct 01 13:26:50 compute-0 sudo[237762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:50 compute-0 python3.9[237764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:50 compute-0 sudo[237762]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:51 compute-0 sudo[237914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogdenquwznpreklhdtimfnajhkkonzzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325211.0331264-725-6136777622655/AnsiballZ_stat.py'
Oct 01 13:26:51 compute-0 sudo[237914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:51 compute-0 python3.9[237916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:51 compute-0 sudo[237914]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:52 compute-0 ceph-mon[74802]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:52 compute-0 sudo[238037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anctffhandmkixneswpblmeeddgfefud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325211.0331264-725-6136777622655/AnsiballZ_copy.py'
Oct 01 13:26:52 compute-0 sudo[238037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:52 compute-0 python3.9[238039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325211.0331264-725-6136777622655/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:52 compute-0 sudo[238037]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:52 compute-0 sshd-session[237483]: Received disconnect from 80.253.31.232 port 43246:11: Bye Bye [preauth]
Oct 01 13:26:52 compute-0 sshd-session[237483]: Disconnected from invalid user pavel 80.253.31.232 port 43246 [preauth]
Oct 01 13:26:53 compute-0 ceph-mon[74802]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:53 compute-0 sudo[238189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswovhkbniypfyxwlzflcvfhjremivde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325212.8322463-742-222859270748118/AnsiballZ_file.py'
Oct 01 13:26:53 compute-0 sudo[238189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:53 compute-0 python3.9[238191]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:26:53 compute-0 sudo[238189]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:54 compute-0 sudo[238341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzkvhtynxcehdhvlfeaabmigiwzcbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325213.7152958-750-140931777289309/AnsiballZ_stat.py'
Oct 01 13:26:54 compute-0 sudo[238341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:54 compute-0 python3.9[238343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:26:54 compute-0 sudo[238341]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:54 compute-0 sudo[238464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzwauspqiymqgjtthjdcoqqbntukkbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325213.7152958-750-140931777289309/AnsiballZ_copy.py'
Oct 01 13:26:54 compute-0 sudo[238464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:55 compute-0 python3.9[238466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325213.7152958-750-140931777289309/.source.json _original_basename=.yqgouhp8 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:55 compute-0 sudo[238464]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:55 compute-0 sudo[238616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivuknjilgmrktnqougioobesdyeonopk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325215.3311384-765-110092646405488/AnsiballZ_file.py'
Oct 01 13:26:55 compute-0 sudo[238616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:55 compute-0 python3.9[238618]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:26:55 compute-0 ceph-mon[74802]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:56 compute-0 sudo[238616]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:26:56 compute-0 sudo[238768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwtyppofeqqogyhbatyhsbdshdwynkdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325216.2535903-773-32381734257917/AnsiballZ_stat.py'
Oct 01 13:26:56 compute-0 sudo[238768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:56 compute-0 sudo[238768]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:26:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:26:57 compute-0 sudo[238891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwagrxyjxtstbhivkmjmvycihqxvcntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325216.2535903-773-32381734257917/AnsiballZ_copy.py'
Oct 01 13:26:57 compute-0 sudo[238891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:57 compute-0 sudo[238891]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:57 compute-0 ceph-mon[74802]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:58 compute-0 sudo[239043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsazguyqzxmqlakbwysepvnptirqxfzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325217.9630322-790-202218324210138/AnsiballZ_container_config_data.py'
Oct 01 13:26:58 compute-0 sudo[239043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:58 compute-0 python3.9[239045]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 01 13:26:58 compute-0 sudo[239043]: pam_unix(sudo:session): session closed for user root
Oct 01 13:26:59 compute-0 ceph-mon[74802]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:26:59 compute-0 sudo[239195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eigidibzdckmcadoqdijycmckutpydch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325218.9744854-799-143941712065968/AnsiballZ_container_config_hash.py'
Oct 01 13:26:59 compute-0 sudo[239195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:26:59 compute-0 python3.9[239197]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:26:59 compute-0 sudo[239195]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:00 compute-0 sudo[239347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prlvzxuyijjyycqhiwldkgsnnxzueibj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325220.0185616-808-145788464305090/AnsiballZ_podman_container_info.py'
Oct 01 13:27:00 compute-0 sudo[239347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:00 compute-0 python3.9[239349]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 01 13:27:01 compute-0 sudo[239347]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:02 compute-0 sudo[239526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyrutqcjedocrjvjwavcwgzzzayiqypw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759325221.7757943-821-82461061833368/AnsiballZ_edpm_container_manage.py'
Oct 01 13:27:02 compute-0 sudo[239526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:02 compute-0 ceph-mon[74802]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:02 compute-0 python3[239528]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:27:03 compute-0 ceph-mon[74802]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:05 compute-0 ceph-mon[74802]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:07 compute-0 podman[239542]: 2025-10-01 13:27:07.396198318 +0000 UTC m=+4.594907541 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct 01 13:27:07 compute-0 podman[239600]: 2025-10-01 13:27:07.608432445 +0000 UTC m=+0.045941990 image pull 80aeb93432d60c5f52c5325081f51dbf5658fe1615083ed284852e8f6df43250 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct 01 13:27:08 compute-0 ceph-mon[74802]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:08 compute-0 podman[239600]: 2025-10-01 13:27:08.505912387 +0000 UTC m=+0.943421902 container create a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 01 13:27:08 compute-0 python3[239528]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22
Oct 01 13:27:08 compute-0 sudo[239526]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:09 compute-0 sudo[239789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meqtpeneadmxtleplmfayzftanyzirij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325228.9099207-829-109269945334031/AnsiballZ_stat.py'
Oct 01 13:27:09 compute-0 sudo[239789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:09 compute-0 python3.9[239791]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:27:09 compute-0 sudo[239789]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:10 compute-0 ceph-mon[74802]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:10 compute-0 sudo[239956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llkidnidtbdapzxhwmpmzouzilwurjir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325230.1166315-838-200531648882155/AnsiballZ_file.py'
Oct 01 13:27:10 compute-0 sudo[239956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:10 compute-0 podman[239917]: 2025-10-01 13:27:10.525083885 +0000 UTC m=+0.088168503 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:27:10 compute-0 python3.9[239965]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:10 compute-0 sudo[239956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:11 compute-0 sudo[240040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldcbtxtusrbfswoephmvwmnxpeuiaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325230.1166315-838-200531648882155/AnsiballZ_stat.py'
Oct 01 13:27:11 compute-0 sudo[240040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:11 compute-0 ceph-mon[74802]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:11 compute-0 python3.9[240042]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:27:11 compute-0 sudo[240040]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:12 compute-0 sudo[240193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhdusepatocdrkiqzddrmhtewhpfbvjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325231.5211542-838-266239700217891/AnsiballZ_copy.py'
Oct 01 13:27:12 compute-0 sudo[240193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:12 compute-0 python3.9[240195]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325231.5211542-838-266239700217891/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.293 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:27:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:27:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:27:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:27:12 compute-0 sudo[240193]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:12 compute-0 sudo[240269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhjxrplfbqmdnodcczgxweyzgasqrgim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325231.5211542-838-266239700217891/AnsiballZ_systemd.py'
Oct 01 13:27:12 compute-0 sudo[240269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:13 compute-0 python3.9[240271]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:27:13 compute-0 systemd[1]: Reloading.
Oct 01 13:27:13 compute-0 systemd-rc-local-generator[240294]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:13 compute-0 systemd-sysv-generator[240298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:13 compute-0 ceph-mon[74802]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:13 compute-0 sudo[240269]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:13 compute-0 sudo[240379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mowgexrugcictzjhexboifamrgzehhjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325231.5211542-838-266239700217891/AnsiballZ_systemd.py'
Oct 01 13:27:13 compute-0 sudo[240379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:13 compute-0 sshd-session[240119]: Invalid user backupuser from 27.254.137.144 port 57586
Oct 01 13:27:13 compute-0 sshd-session[240119]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:27:13 compute-0 sshd-session[240119]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:27:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:14 compute-0 python3.9[240381]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:27:14 compute-0 systemd[1]: Reloading.
Oct 01 13:27:14 compute-0 systemd-rc-local-generator[240435]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:14 compute-0 systemd-sysv-generator[240440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:14 compute-0 podman[240384]: 2025-10-01 13:27:14.476915483 +0000 UTC m=+0.190484422 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct 01 13:27:14 compute-0 unix_chkpwd[240445]: password check failed for user (root)
Oct 01 13:27:14 compute-0 sshd-session[240383]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139  user=root
Oct 01 13:27:14 compute-0 systemd[1]: Starting multipathd container...
Oct 01 13:27:15 compute-0 sshd-session[240119]: Failed password for invalid user backupuser from 27.254.137.144 port 57586 ssh2
Oct 01 13:27:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:16 compute-0 sshd-session[240383]: Failed password for root from 200.7.101.139 port 50458 ssh2
Oct 01 13:27:16 compute-0 ceph-mon[74802]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct 01 13:27:17 compute-0 sshd-session[240119]: Received disconnect from 27.254.137.144 port 57586:11: Bye Bye [preauth]
Oct 01 13:27:17 compute-0 sshd-session[240119]: Disconnected from invalid user backupuser 27.254.137.144 port 57586 [preauth]
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:18 compute-0 podman[240448]: 2025-10-01 13:27:18.188185207 +0000 UTC m=+3.337724178 container init a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:27:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:18 compute-0 multipathd[240463]: + sudo -E kolla_set_configs
Oct 01 13:27:18 compute-0 podman[240448]: 2025-10-01 13:27:18.230900125 +0000 UTC m=+3.380439036 container start a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:27:18 compute-0 sudo[240479]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 13:27:18 compute-0 sudo[240479]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 13:27:18 compute-0 sudo[240479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:27:18 compute-0 multipathd[240463]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:27:18 compute-0 multipathd[240463]: INFO:__main__:Validating config file
Oct 01 13:27:18 compute-0 multipathd[240463]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:27:18 compute-0 multipathd[240463]: INFO:__main__:Writing out command to execute
Oct 01 13:27:18 compute-0 sudo[240479]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:18 compute-0 multipathd[240463]: ++ cat /run_command
Oct 01 13:27:18 compute-0 multipathd[240463]: + CMD='/usr/sbin/multipathd -d'
Oct 01 13:27:18 compute-0 multipathd[240463]: + ARGS=
Oct 01 13:27:18 compute-0 multipathd[240463]: + sudo kolla_copy_cacerts
Oct 01 13:27:18 compute-0 sudo[240494]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 13:27:18 compute-0 sudo[240494]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 13:27:18 compute-0 sudo[240494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:27:18 compute-0 sudo[240494]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:18 compute-0 multipathd[240463]: + [[ ! -n '' ]]
Oct 01 13:27:18 compute-0 multipathd[240463]: + . kolla_extend_start
Oct 01 13:27:18 compute-0 multipathd[240463]: Running command: '/usr/sbin/multipathd -d'
Oct 01 13:27:18 compute-0 multipathd[240463]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 01 13:27:18 compute-0 multipathd[240463]: + umask 0022
Oct 01 13:27:18 compute-0 multipathd[240463]: + exec /usr/sbin/multipathd -d
Oct 01 13:27:18 compute-0 multipathd[240463]: 7799.120582 | --------start up--------
Oct 01 13:27:18 compute-0 multipathd[240463]: 7799.120609 | read /etc/multipath.conf
Oct 01 13:27:18 compute-0 multipathd[240463]: 7799.130860 | path checkers start up
Oct 01 13:27:18 compute-0 sshd-session[240383]: Received disconnect from 200.7.101.139 port 50458:11: Bye Bye [preauth]
Oct 01 13:27:18 compute-0 sshd-session[240383]: Disconnected from authenticating user root 200.7.101.139 port 50458 [preauth]
Oct 01 13:27:19 compute-0 podman[240448]: multipathd
Oct 01 13:27:19 compute-0 systemd[1]: Started multipathd container.
Oct 01 13:27:19 compute-0 sudo[240379]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:19 compute-0 podman[240480]: 2025-10-01 13:27:19.166685506 +0000 UTC m=+0.916715260 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 13:27:19 compute-0 podman[240466]: 2025-10-01 13:27:19.203863438 +0000 UTC m=+2.109071445 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:19 compute-0 ceph-mon[74802]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:20 compute-0 python3.9[240671]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:27:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:20 compute-0 sudo[240823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwzymvhcbuojnjcdydcgvitfnjaiepa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325240.2454932-874-5458571773635/AnsiballZ_command.py'
Oct 01 13:27:20 compute-0 sudo[240823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:20 compute-0 ceph-mon[74802]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:20 compute-0 python3.9[240825]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:27:20 compute-0 sudo[240823]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:21 compute-0 sudo[240955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:21 compute-0 sudo[240955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:21 compute-0 sudo[240955]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:21 compute-0 sudo[241031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfxdeufclnufachriuxxlkfpxsiuaesi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325241.1656451-882-197548951232972/AnsiballZ_systemd.py'
Oct 01 13:27:21 compute-0 sudo[241031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:21 compute-0 sudo[241002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:27:21 compute-0 sudo[241002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:21 compute-0 sudo[241002]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:21 compute-0 sudo[241041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:21 compute-0 sudo[241041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:21 compute-0 sudo[241041]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:21 compute-0 sudo[241066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:27:21 compute-0 sudo[241066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:22 compute-0 ceph-mon[74802]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:22 compute-0 python3.9[241038]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:27:22 compute-0 systemd[1]: Stopping multipathd container...
Oct 01 13:27:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:22 compute-0 sudo[241066]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:27:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:27:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:27:22 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:27:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:27:22 compute-0 multipathd[240463]: 7803.717939 | exit (signal)
Oct 01 13:27:22 compute-0 multipathd[240463]: 7803.718037 | --------shut down-------
Oct 01 13:27:23 compute-0 systemd[1]: libpod-a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.scope: Deactivated successfully.
Oct 01 13:27:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:23 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 96fd9f62-7e58-4662-92c6-b76ee4c603ac does not exist
Oct 01 13:27:23 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2cf771a6-62c0-4b4e-bba2-793946cc3209 does not exist
Oct 01 13:27:23 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c4e4c87-d69b-4bcf-b76c-c081f9eba861 does not exist
Oct 01 13:27:23 compute-0 podman[241112]: 2025-10-01 13:27:23.029801412 +0000 UTC m=+0.810365804 container died a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 13:27:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:27:23 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:27:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:27:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:27:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:27:23 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:27:23 compute-0 sudo[241152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:23 compute-0 ceph-mon[74802]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:27:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:27:23 compute-0 sudo[241152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:23 compute-0 sudo[241152]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:23 compute-0 sudo[241177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:27:23 compute-0 sudo[241177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:23 compute-0 sudo[241177]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:23 compute-0 sudo[241202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:23 compute-0 sudo[241202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:23 compute-0 sudo[241202]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:23 compute-0 systemd[1]: a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1-79bbfdc24022dde.timer: Deactivated successfully.
Oct 01 13:27:23 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct 01 13:27:23 compute-0 sudo[241230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:27:23 compute-0 sudo[241230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed-merged.mount: Deactivated successfully.
Oct 01 13:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1-userdata-shm.mount: Deactivated successfully.
Oct 01 13:27:23 compute-0 sshd-session[241210]: Invalid user ts3server from 156.236.31.46 port 45124
Oct 01 13:27:23 compute-0 sshd-session[241210]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:27:23 compute-0 sshd-session[241210]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:27:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:27:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:27:24 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:27:24 compute-0 podman[241112]: 2025-10-01 13:27:24.334392229 +0000 UTC m=+2.114956661 container cleanup a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 01 13:27:24 compute-0 podman[241112]: multipathd
Oct 01 13:27:24 compute-0 podman[241282]: multipathd
Oct 01 13:27:24 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 01 13:27:24 compute-0 systemd[1]: Stopped multipathd container.
Oct 01 13:27:24 compute-0 systemd[1]: Starting multipathd container...
Oct 01 13:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98b8d3ca354493903085781d259891c9a516ea186720f0f87b701a39b7916ed/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:24 compute-0 podman[241317]: 2025-10-01 13:27:24.740794484 +0000 UTC m=+0.214221221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:24 compute-0 podman[241317]: 2025-10-01 13:27:24.867723779 +0000 UTC m=+0.341150456 container create 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:24 compute-0 systemd[1]: Started libpod-conmon-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope.
Oct 01 13:27:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1.
Oct 01 13:27:25 compute-0 podman[241317]: 2025-10-01 13:27:25.140297151 +0000 UTC m=+0.613723798 container init 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:25 compute-0 podman[241317]: 2025-10-01 13:27:25.149672877 +0000 UTC m=+0.623099524 container start 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:27:25 compute-0 systemd[1]: libpod-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope: Deactivated successfully.
Oct 01 13:27:25 compute-0 upbeat_kowalevski[241344]: 167 167
Oct 01 13:27:25 compute-0 conmon[241344]: conmon 252ecce36cea1ebba698 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope/container/memory.events
Oct 01 13:27:25 compute-0 podman[241317]: 2025-10-01 13:27:25.246905265 +0000 UTC m=+0.720332002 container attach 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:27:25 compute-0 podman[241317]: 2025-10-01 13:27:25.247656279 +0000 UTC m=+0.721082966 container died 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 13:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0db6d8eb8494bc85ed162e6f996377c018e9de3936776670b59378a0339a79c-merged.mount: Deactivated successfully.
Oct 01 13:27:25 compute-0 ceph-mon[74802]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:25 compute-0 sshd-session[241210]: Failed password for invalid user ts3server from 156.236.31.46 port 45124 ssh2
Oct 01 13:27:26 compute-0 podman[241317]: 2025-10-01 13:27:26.168677134 +0000 UTC m=+1.642103791 container remove 252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:27:26 compute-0 systemd[1]: libpod-conmon-252ecce36cea1ebba698875fe70fe83a9e86befaf7ddde370e495040b11e4b35.scope: Deactivated successfully.
Oct 01 13:27:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:26 compute-0 sshd-session[241210]: Received disconnect from 156.236.31.46 port 45124:11: Bye Bye [preauth]
Oct 01 13:27:26 compute-0 sshd-session[241210]: Disconnected from invalid user ts3server 156.236.31.46 port 45124 [preauth]
Oct 01 13:27:26 compute-0 podman[241296]: 2025-10-01 13:27:26.513545556 +0000 UTC m=+2.044709955 container init a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 01 13:27:26 compute-0 multipathd[241336]: + sudo -E kolla_set_configs
Oct 01 13:27:26 compute-0 podman[241296]: 2025-10-01 13:27:26.558582067 +0000 UTC m=+2.089746416 container start a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 13:27:26 compute-0 sudo[241383]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 01 13:27:26 compute-0 sudo[241383]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 13:27:26 compute-0 sudo[241383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:27:26 compute-0 multipathd[241336]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:27:26 compute-0 multipathd[241336]: INFO:__main__:Validating config file
Oct 01 13:27:26 compute-0 multipathd[241336]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:27:26 compute-0 multipathd[241336]: INFO:__main__:Writing out command to execute
Oct 01 13:27:26 compute-0 sudo[241383]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:26 compute-0 multipathd[241336]: ++ cat /run_command
Oct 01 13:27:26 compute-0 multipathd[241336]: + CMD='/usr/sbin/multipathd -d'
Oct 01 13:27:26 compute-0 multipathd[241336]: + ARGS=
Oct 01 13:27:26 compute-0 multipathd[241336]: + sudo kolla_copy_cacerts
Oct 01 13:27:26 compute-0 sudo[241399]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 01 13:27:26 compute-0 podman[241369]: 2025-10-01 13:27:26.566659042 +0000 UTC m=+0.209957407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:26 compute-0 sudo[241399]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 01 13:27:26 compute-0 sudo[241399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 01 13:27:26 compute-0 sudo[241399]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:26 compute-0 multipathd[241336]: + [[ ! -n '' ]]
Oct 01 13:27:26 compute-0 multipathd[241336]: + . kolla_extend_start
Oct 01 13:27:26 compute-0 multipathd[241336]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 01 13:27:26 compute-0 multipathd[241336]: Running command: '/usr/sbin/multipathd -d'
Oct 01 13:27:26 compute-0 multipathd[241336]: + umask 0022
Oct 01 13:27:26 compute-0 multipathd[241336]: + exec /usr/sbin/multipathd -d
Oct 01 13:27:26 compute-0 multipathd[241336]: 7807.418257 | --------start up--------
Oct 01 13:27:26 compute-0 multipathd[241336]: 7807.418282 | read /etc/multipath.conf
Oct 01 13:27:26 compute-0 multipathd[241336]: 7807.424693 | path checkers start up
Oct 01 13:27:26 compute-0 podman[241296]: multipathd
Oct 01 13:27:26 compute-0 systemd[1]: Started multipathd container.
Oct 01 13:27:26 compute-0 sudo[241031]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:26 compute-0 podman[241369]: 2025-10-01 13:27:26.872787373 +0000 UTC m=+0.516085658 container create 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:27:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:27 compute-0 systemd[1]: Started libpod-conmon-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope.
Oct 01 13:27:27 compute-0 podman[241384]: 2025-10-01 13:27:27.022860678 +0000 UTC m=+0.450251159 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true)
Oct 01 13:27:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:27 compute-0 podman[241369]: 2025-10-01 13:27:27.182130995 +0000 UTC m=+0.825429310 container init 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:27:27 compute-0 podman[241369]: 2025-10-01 13:27:27.19436512 +0000 UTC m=+0.837663395 container start 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:27:27 compute-0 sudo[241573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfuyixvglkhqrnltfmzsxvracixapqgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325246.9304874-890-24902565819040/AnsiballZ_file.py'
Oct 01 13:27:27 compute-0 sudo[241573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:27 compute-0 podman[241369]: 2025-10-01 13:27:27.314844362 +0000 UTC m=+0.958142677 container attach 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:27:27 compute-0 python3.9[241576]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:27 compute-0 sudo[241573]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:28 compute-0 ceph-mon[74802]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:28 compute-0 sudo[241743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxitdpshwqahrotidfjwroemxwxbygqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325247.8176606-902-239047218982239/AnsiballZ_file.py'
Oct 01 13:27:28 compute-0 sudo[241743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:28 compute-0 laughing_merkle[241496]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:27:28 compute-0 laughing_merkle[241496]: --> relative data size: 1.0
Oct 01 13:27:28 compute-0 laughing_merkle[241496]: --> All data devices are unavailable
Oct 01 13:27:28 compute-0 systemd[1]: libpod-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Deactivated successfully.
Oct 01 13:27:28 compute-0 systemd[1]: libpod-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Consumed 1.044s CPU time.
Oct 01 13:27:28 compute-0 podman[241369]: 2025-10-01 13:27:28.297932726 +0000 UTC m=+1.941231041 container died 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:27:28 compute-0 python3.9[241747]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 01 13:27:28 compute-0 sudo[241743]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b48c8fc53f1eaa0ac065e3fe6a536a09b05facb74555858205b75e0c8ef1a92-merged.mount: Deactivated successfully.
Oct 01 13:27:29 compute-0 sudo[241913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abtzqyhytrhrjyqxdlspompdawiadvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325248.658248-910-26267148382689/AnsiballZ_modprobe.py'
Oct 01 13:27:29 compute-0 sudo[241913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:29 compute-0 python3.9[241915]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 01 13:27:29 compute-0 kernel: Key type psk registered
Oct 01 13:27:29 compute-0 sudo[241913]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:29 compute-0 podman[241369]: 2025-10-01 13:27:29.526944069 +0000 UTC m=+3.170242354 container remove 1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:27:29 compute-0 systemd[1]: libpod-conmon-1f9105534096ab21c810b799d552dd2d0ac28afe12185c57709718fa4efb25ba.scope: Deactivated successfully.
Oct 01 13:27:29 compute-0 sudo[241230]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:29 compute-0 sudo[241949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:29 compute-0 sudo[241949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:29 compute-0 sudo[241949]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:29 compute-0 ceph-mon[74802]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:29 compute-0 sudo[241995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:27:29 compute-0 sudo[241995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:29 compute-0 sudo[241995]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:29 compute-0 sudo[242051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:29 compute-0 sudo[242051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:29 compute-0 sudo[242051]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:29 compute-0 sudo[242086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:27:29 compute-0 sudo[242086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:30 compute-0 sudo[242185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsijuepqkmpreccymhmwlzpzvmrxhfcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325249.6917362-918-175546479693161/AnsiballZ_stat.py'
Oct 01 13:27:30 compute-0 sudo[242185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:30 compute-0 python3.9[242189]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:27:30 compute-0 sudo[242185]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.375021731 +0000 UTC m=+0.121311749 container create 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.279026952 +0000 UTC m=+0.025316960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:30 compute-0 systemd[1]: Started libpod-conmon-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope.
Oct 01 13:27:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.661986978 +0000 UTC m=+0.408276976 container init 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.67316588 +0000 UTC m=+0.419455888 container start 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:30 compute-0 determined_faraday[242268]: 167 167
Oct 01 13:27:30 compute-0 systemd[1]: libpod-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope: Deactivated successfully.
Oct 01 13:27:30 compute-0 sudo[242363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvodwdhqcviautjjykqhkjehrkcatewc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325249.6917362-918-175546479693161/AnsiballZ_copy.py'
Oct 01 13:27:30 compute-0 sudo[242363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.820175559 +0000 UTC m=+0.566465537 container attach 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:27:30 compute-0 podman[242216]: 2025-10-01 13:27:30.820545091 +0000 UTC m=+0.566835079 container died 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:27:30 compute-0 python3.9[242371]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759325249.6917362-918-175546479693161/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:30 compute-0 sudo[242363]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2eabcfbedb149c11cc71650d0014a38388fd139ec9a93f17721a485049e915-merged.mount: Deactivated successfully.
Oct 01 13:27:31 compute-0 sudo[242522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmabommngmtfeipbygroaqmrjfovhefa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325251.212894-934-13322449391139/AnsiballZ_lineinfile.py'
Oct 01 13:27:31 compute-0 sudo[242522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:31 compute-0 podman[242216]: 2025-10-01 13:27:31.816277352 +0000 UTC m=+1.562567330 container remove 2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:27:31 compute-0 systemd[1]: libpod-conmon-2abef67041cc18928b0eb360c7620f09f7b289ee830dadb476df5ff244bb1e11.scope: Deactivated successfully.
Oct 01 13:27:31 compute-0 python3.9[242524]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:31 compute-0 sudo[242522]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:32 compute-0 podman[242556]: 2025-10-01 13:27:32.017564724 +0000 UTC m=+0.033272771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:32 compute-0 ceph-mon[74802]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:32 compute-0 podman[242556]: 2025-10-01 13:27:32.227461528 +0000 UTC m=+0.243169555 container create b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 13:27:32 compute-0 systemd[1]: Started libpod-conmon-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope.
Oct 01 13:27:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:32 compute-0 sudo[242701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbnoouxubfisszzhwabkkongxpknaih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325252.091595-942-26130729908809/AnsiballZ_systemd.py'
Oct 01 13:27:32 compute-0 sudo[242701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:32 compute-0 podman[242556]: 2025-10-01 13:27:32.532466263 +0000 UTC m=+0.548174360 container init b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:27:32 compute-0 podman[242556]: 2025-10-01 13:27:32.543027497 +0000 UTC m=+0.558735554 container start b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:27:32 compute-0 python3.9[242703]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:27:32 compute-0 podman[242556]: 2025-10-01 13:27:32.823995813 +0000 UTC m=+0.839703900 container attach b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:27:32 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 01 13:27:32 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 01 13:27:32 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 01 13:27:32 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 01 13:27:32 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 01 13:27:32 compute-0 sudo[242701]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]: {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     "0": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "devices": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "/dev/loop3"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             ],
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_name": "ceph_lv0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_size": "21470642176",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "name": "ceph_lv0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "tags": {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_name": "ceph",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.crush_device_class": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.encrypted": "0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_id": "0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.vdo": "0"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             },
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "vg_name": "ceph_vg0"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         }
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     ],
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     "1": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "devices": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "/dev/loop4"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             ],
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_name": "ceph_lv1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_size": "21470642176",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "name": "ceph_lv1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "tags": {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_name": "ceph",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.crush_device_class": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.encrypted": "0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_id": "1",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.vdo": "0"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             },
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "vg_name": "ceph_vg1"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         }
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     ],
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     "2": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "devices": [
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "/dev/loop5"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             ],
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_name": "ceph_lv2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_size": "21470642176",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "name": "ceph_lv2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "tags": {
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.cluster_name": "ceph",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.crush_device_class": "",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.encrypted": "0",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osd_id": "2",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:                 "ceph.vdo": "0"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             },
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "type": "block",
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:             "vg_name": "ceph_vg2"
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:         }
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]:     ]
Oct 01 13:27:33 compute-0 nice_ramanujan[242652]: }
Oct 01 13:27:33 compute-0 systemd[1]: libpod-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope: Deactivated successfully.
Oct 01 13:27:33 compute-0 podman[242556]: 2025-10-01 13:27:33.424028968 +0000 UTC m=+1.439737075 container died b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:27:33 compute-0 sudo[242873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqygfyrcncymsxhqbnkpaalzkccxqkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325253.2161796-950-138300692144904/AnsiballZ_setup.py'
Oct 01 13:27:33 compute-0 sudo[242873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:33 compute-0 ceph-mon[74802]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:33 compute-0 python3.9[242875]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 01 13:27:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4c4145ab8a282655ab34b4a44c853e581154a1116527ea892d98ae30b74a06e-merged.mount: Deactivated successfully.
Oct 01 13:27:34 compute-0 sudo[242873]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:34 compute-0 podman[242556]: 2025-10-01 13:27:34.848983065 +0000 UTC m=+2.864691092 container remove b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:27:34 compute-0 systemd[1]: libpod-conmon-b960dfc4921ff8b2d5b7c04d97b6ad7517656d56b4c8518aee7ac580148f5201.scope: Deactivated successfully.
Oct 01 13:27:34 compute-0 sudo[242086]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:34 compute-0 sudo[242958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjdcwygzpkhfmsbqkkculvekfeaukby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325253.2161796-950-138300692144904/AnsiballZ_dnf.py'
Oct 01 13:27:34 compute-0 sudo[242958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:34 compute-0 sudo[242960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:34 compute-0 sudo[242960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:35 compute-0 sudo[242960]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:35 compute-0 sudo[242986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:27:35 compute-0 sudo[242986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:35 compute-0 sudo[242986]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:35 compute-0 python3.9[242961]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 01 13:27:35 compute-0 sudo[243011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:35 compute-0 sudo[243011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:35 compute-0 sudo[243011]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:35 compute-0 sudo[243037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:27:35 compute-0 sudo[243037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:35 compute-0 ceph-mon[74802]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:35 compute-0 podman[243102]: 2025-10-01 13:27:35.75641194 +0000 UTC m=+0.041279513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:35 compute-0 podman[243102]: 2025-10-01 13:27:35.92340703 +0000 UTC m=+0.208274583 container create fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:27:36 compute-0 systemd[1]: Started libpod-conmon-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope.
Oct 01 13:27:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:36 compute-0 podman[243102]: 2025-10-01 13:27:36.321146381 +0000 UTC m=+0.606014024 container init fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:27:36 compute-0 podman[243102]: 2025-10-01 13:27:36.335828785 +0000 UTC m=+0.620696378 container start fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:27:36 compute-0 optimistic_jennings[243119]: 167 167
Oct 01 13:27:36 compute-0 systemd[1]: libpod-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope: Deactivated successfully.
Oct 01 13:27:36 compute-0 podman[243102]: 2025-10-01 13:27:36.477724942 +0000 UTC m=+0.762592585 container attach fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:36 compute-0 podman[243102]: 2025-10-01 13:27:36.478409574 +0000 UTC m=+0.763277157 container died fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:27:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7059930d084f7d456ca38a3adfb1d49810e4a0bfe063d0818dc6a263ffb4d496-merged.mount: Deactivated successfully.
Oct 01 13:27:38 compute-0 ceph-mon[74802]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:38 compute-0 podman[243102]: 2025-10-01 13:27:38.792656874 +0000 UTC m=+3.077524477 container remove fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jennings, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:27:38 compute-0 systemd[1]: libpod-conmon-fb1760c0e97684e48950d4552da1c337e5d2a19f1edb3b89deee22d3d290ee48.scope: Deactivated successfully.
Oct 01 13:27:39 compute-0 podman[243145]: 2025-10-01 13:27:38.995070602 +0000 UTC m=+0.034512411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:27:39 compute-0 podman[243145]: 2025-10-01 13:27:39.55811662 +0000 UTC m=+0.597558329 container create 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:27:39 compute-0 ceph-mon[74802]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:40 compute-0 systemd[1]: Started libpod-conmon-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope.
Oct 01 13:27:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:27:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:40 compute-0 podman[243145]: 2025-10-01 13:27:40.248791375 +0000 UTC m=+1.288233124 container init 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:27:40 compute-0 podman[243145]: 2025-10-01 13:27:40.259446801 +0000 UTC m=+1.298888550 container start 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:40 compute-0 podman[243145]: 2025-10-01 13:27:40.434062381 +0000 UTC m=+1.473504110 container attach 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:27:41 compute-0 hungry_kirch[243162]: {
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_id": 0,
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "type": "bluestore"
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     },
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_id": 2,
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "type": "bluestore"
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     },
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_id": 1,
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:         "type": "bluestore"
Oct 01 13:27:41 compute-0 hungry_kirch[243162]:     }
Oct 01 13:27:41 compute-0 hungry_kirch[243162]: }
Oct 01 13:27:41 compute-0 systemd[1]: libpod-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Deactivated successfully.
Oct 01 13:27:41 compute-0 podman[243145]: 2025-10-01 13:27:41.451748136 +0000 UTC m=+2.491189845 container died 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:27:41 compute-0 systemd[1]: libpod-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Consumed 1.174s CPU time.
Oct 01 13:27:41 compute-0 ceph-mon[74802]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-86784d6e31b8b7cca36cfc5e62fed4a29633d7fb5c84147656b1317611448634-merged.mount: Deactivated successfully.
Oct 01 13:27:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:44 compute-0 ceph-mon[74802]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:45 compute-0 podman[243145]: 2025-10-01 13:27:45.461884492 +0000 UTC m=+6.501326241 container remove 7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:27:45 compute-0 podman[243195]: 2025-10-01 13:27:45.47100187 +0000 UTC m=+4.008228877 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:27:45 compute-0 systemd[1]: libpod-conmon-7bdc62931a74d617f1b95e5b5583c76ad3799f7d61f208cc5fde84687ce38189.scope: Deactivated successfully.
Oct 01 13:27:45 compute-0 sudo[243037]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:45 compute-0 podman[243233]: 2025-10-01 13:27:45.578628826 +0000 UTC m=+0.126641057 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:27:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:27:45 compute-0 systemd[1]: Reloading.
Oct 01 13:27:45 compute-0 systemd-rc-local-generator[243287]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:45 compute-0 systemd-sysv-generator[243294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:46 compute-0 systemd[1]: Reloading.
Oct 01 13:27:46 compute-0 systemd-sysv-generator[243326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:46 compute-0 systemd-rc-local-generator[243323]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:46 compute-0 ceph-mon[74802]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:27:46 compute-0 systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 01 13:27:46 compute-0 systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 01 13:27:47 compute-0 lvm[243368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 13:27:47 compute-0 lvm[243367]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 13:27:47 compute-0 lvm[243367]: VG ceph_vg2 finished
Oct 01 13:27:47 compute-0 lvm[243368]: VG ceph_vg1 finished
Oct 01 13:27:47 compute-0 lvm[243369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 13:27:47 compute-0 lvm[243369]: VG ceph_vg0 finished
Oct 01 13:27:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 26b6f65d-c9ab-4f5e-a751-8c9f852fcf5a does not exist
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b440f93d-9367-4f24-9f39-53f508b8887f does not exist
Oct 01 13:27:47 compute-0 sudo[243390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:27:47 compute-0 sudo[243390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:47 compute-0 sudo[243390]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:47 compute-0 sudo[243416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:27:47 compute-0 sudo[243416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:27:47 compute-0 sudo[243416]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 01 13:27:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 01 13:27:47 compute-0 systemd[1]: Reloading.
Oct 01 13:27:47 compute-0 systemd-rc-local-generator[243473]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:47 compute-0 systemd-sysv-generator[243476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:27:47
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:27:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:27:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:27:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:49 compute-0 ceph-mon[74802]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:27:49 compute-0 podman[243571]: 2025-10-01 13:27:49.510931527 +0000 UTC m=+0.072266512 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:27:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:50 compute-0 ceph-mon[74802]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:50 compute-0 sudo[242958]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:51 compute-0 sudo[244782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxuakbpubuuuggwsguryrvoosvlvktqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325271.0805767-962-254182309926016/AnsiballZ_file.py'
Oct 01 13:27:51 compute-0 sudo[244782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:51 compute-0 python3.9[244784]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:51 compute-0 sudo[244782]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:51 compute-0 ceph-mon[74802]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:52 compute-0 python3.9[244934]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 01 13:27:53 compute-0 sshd-session[244939]: Invalid user michelle from 80.253.31.232 port 47774
Oct 01 13:27:53 compute-0 sshd-session[244939]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:27:53 compute-0 sshd-session[244939]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:27:53 compute-0 ceph-mon[74802]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:53 compute-0 sudo[245090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqipstmsnjacyccfcgyazkkitgiqldrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325273.2270515-980-78799279912127/AnsiballZ_file.py'
Oct 01 13:27:53 compute-0 sudo[245090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:53 compute-0 python3.9[245092]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:27:53 compute-0 sudo[245090]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:55 compute-0 sudo[245242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfjecoqxskrmbnjriedkvxwoqhrhloh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325274.3209202-991-46751889126679/AnsiballZ_systemd_service.py'
Oct 01 13:27:55 compute-0 sudo[245242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:27:55 compute-0 ceph-mon[74802]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:55 compute-0 python3.9[245244]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:27:55 compute-0 systemd[1]: Reloading.
Oct 01 13:27:55 compute-0 systemd-rc-local-generator[245272]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:27:55 compute-0 systemd-sysv-generator[245276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:27:55 compute-0 sudo[245242]: pam_unix(sudo:session): session closed for user root
Oct 01 13:27:55 compute-0 sshd-session[244939]: Failed password for invalid user michelle from 80.253.31.232 port 47774 ssh2
Oct 01 13:27:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:56 compute-0 python3.9[245429]: ansible-ansible.builtin.service_facts Invoked
Oct 01 13:27:56 compute-0 network[245446]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 01 13:27:56 compute-0 network[245447]: 'network-scripts' will be removed from distribution in near future.
Oct 01 13:27:56 compute-0 network[245448]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 01 13:27:56 compute-0 sshd-session[244939]: Received disconnect from 80.253.31.232 port 47774:11: Bye Bye [preauth]
Oct 01 13:27:56 compute-0 sshd-session[244939]: Disconnected from invalid user michelle 80.253.31.232 port 47774 [preauth]
Oct 01 13:27:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:27:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:27:57 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct 01 13:27:57 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:57.815192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:27:57 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct 01 13:27:57 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325277815244, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1392, "num_deletes": 506, "total_data_size": 1747996, "memory_usage": 1774944, "flush_reason": "Manual Compaction"}
Oct 01 13:27:57 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct 01 13:27:57 compute-0 podman[245454]: 2025-10-01 13:27:57.875328958 +0000 UTC m=+0.133890606 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd)
Oct 01 13:27:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:58 compute-0 sshd-session[245428]: error: kex_exchange_identification: read: Connection reset by peer
Oct 01 13:27:58 compute-0 sshd-session[245428]: Connection reset by 118.41.20.173 port 51110
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325278431180, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1720871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13599, "largest_seqno": 14990, "table_properties": {"data_size": 1714771, "index_size": 2919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15195, "raw_average_key_size": 18, "raw_value_size": 1700576, "raw_average_value_size": 2019, "num_data_blocks": 134, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325161, "oldest_key_time": 1759325161, "file_creation_time": 1759325277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 616150 microseconds, and 8431 cpu microseconds.
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:27:58 compute-0 ceph-mon[74802]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.431342) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1720871 bytes OK
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.431397) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754389) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754457) EVENT_LOG_v1 {"time_micros": 1759325278754440, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.754497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1740722, prev total WAL file size 1741877, number of live WAL files 2.
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.756600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1680KB)], [32(7490KB)]
Oct 01 13:27:58 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325278756664, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9391339, "oldest_snapshot_seqno": -1}
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3848 keys, 7449934 bytes, temperature: kUnknown
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279120359, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7449934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7421909, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94173, "raw_average_key_size": 24, "raw_value_size": 7349979, "raw_average_value_size": 1910, "num_data_blocks": 732, "num_entries": 3848, "num_filter_entries": 3848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.120825) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7449934 bytes
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.938090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.8 rd, 20.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.3 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.8) write-amplify(4.3) OK, records in: 4873, records dropped: 1025 output_compression: NoCompression
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.938160) EVENT_LOG_v1 {"time_micros": 1759325279938133, "job": 14, "event": "compaction_finished", "compaction_time_micros": 363843, "compaction_time_cpu_micros": 36062, "output_level": 6, "num_output_files": 1, "total_output_size": 7449934, "num_input_records": 4873, "num_output_records": 3848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279939412, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325279942644, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:58.756431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:27:59 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:27:59.942799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:28:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:28:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3309 writes, 14K keys, 3309 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1274 writes, 5793 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.49 MB, 0.01 MB/s
                                           Interval WAL: 1273 writes, 1273 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     16.6      0.96              0.06         7    0.138       0      0       0.0       0.0
                                             L6      1/0    7.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     65.6     54.1      0.78              0.15         6    0.129     24K   3202       0.0       0.0
                                            Sum      1/0    7.10 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     29.3     33.3      1.74              0.20        13    0.134     24K   3202       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     26.3     26.5      1.34              0.13         8    0.167     17K   2469       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     65.6     54.1      0.78              0.15         6    0.129     24K   3202       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     16.7      0.95              0.06         6    0.159       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 1.7 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 1.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(103,1.43 MB,0.463624%) FilterBlock(14,75.80 KB,0.0240326%) IndexBlock(14,153.55 KB,0.0486845%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:28:00 compute-0 ceph-mon[74802]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:01 compute-0 ceph-mon[74802]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 01 13:28:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 01 13:28:01 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.357s CPU time.
Oct 01 13:28:01 compute-0 systemd[1]: run-r2af6d32c1b43432bb467b78951e1f15e.service: Deactivated successfully.
Oct 01 13:28:02 compute-0 sudo[245744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqceepmoozxwdornadwflummhoublbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325281.6258457-1010-97903652687967/AnsiballZ_systemd_service.py'
Oct 01 13:28:02 compute-0 sudo[245744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:02 compute-0 python3.9[245746]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:02 compute-0 sudo[245744]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:02 compute-0 sudo[245897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkcqmqetmleaqbndlmsaxgqqtzwlgfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325282.53832-1010-131481053926445/AnsiballZ_systemd_service.py'
Oct 01 13:28:02 compute-0 sudo[245897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:03 compute-0 python3.9[245899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:03 compute-0 sudo[245897]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:03 compute-0 sudo[246050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izhhzzdwrufxfqsyaiiwtvanjfkihmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325283.445145-1010-209264767796555/AnsiballZ_systemd_service.py'
Oct 01 13:28:03 compute-0 sudo[246050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:03 compute-0 ceph-mon[74802]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:04 compute-0 python3.9[246052]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:04 compute-0 sudo[246050]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:04 compute-0 sudo[246203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piqgowrmihtotozjslabdhkvbarayafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325284.3145807-1010-271266368670168/AnsiballZ_systemd_service.py'
Oct 01 13:28:04 compute-0 sudo[246203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:04 compute-0 python3.9[246205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:05 compute-0 sudo[246203]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:05 compute-0 ceph-mon[74802]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:05 compute-0 sudo[246356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khqccxenzykqwuvfnypyxzjlhwxozlug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325285.1558506-1010-191128274448673/AnsiballZ_systemd_service.py'
Oct 01 13:28:05 compute-0 sudo[246356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:05 compute-0 python3.9[246358]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:05 compute-0 sudo[246356]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:06 compute-0 sudo[246509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvbztrztcgaimfjhhvjhmhyotqbeciaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325286.0009158-1010-151242902173197/AnsiballZ_systemd_service.py'
Oct 01 13:28:06 compute-0 sudo[246509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:06 compute-0 python3.9[246511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:06 compute-0 sudo[246509]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:07 compute-0 sudo[246662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahkivsyrcmmpywebkcgorwdqpwirisan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325286.8971393-1010-45547359196966/AnsiballZ_systemd_service.py'
Oct 01 13:28:07 compute-0 sudo[246662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:07 compute-0 python3.9[246664]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:07 compute-0 sudo[246662]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:08 compute-0 ceph-mon[74802]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:08 compute-0 sudo[246815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covwaaczjlfzalnoppasmwczykvmafin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325287.9456434-1010-95071987166418/AnsiballZ_systemd_service.py'
Oct 01 13:28:08 compute-0 sudo[246815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:08 compute-0 python3.9[246817]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:28:08 compute-0 sudo[246815]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:09 compute-0 ceph-mon[74802]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:09 compute-0 sudo[246968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytvmdfwqtgfgjijqewxjstjhnuhpvmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325289.0049276-1069-25779011556573/AnsiballZ_file.py'
Oct 01 13:28:09 compute-0 sudo[246968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:09 compute-0 python3.9[246970]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:09 compute-0 sudo[246968]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:10 compute-0 sudo[247120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blwoyrmkgdovecdlrmqgxahjurpkfqec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325289.7817352-1069-98740931261268/AnsiballZ_file.py'
Oct 01 13:28:10 compute-0 sudo[247120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:10 compute-0 python3.9[247122]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:10 compute-0 sudo[247120]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:10 compute-0 sudo[247272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uomtfriuotwjraftrnbtilpvlatpsoua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325290.4414246-1069-220729106080039/AnsiballZ_file.py'
Oct 01 13:28:10 compute-0 sudo[247272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:11 compute-0 python3.9[247274]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:11 compute-0 sudo[247272]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:11 compute-0 sudo[247424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozdbeinofufejrfixnzxxmusfusqouxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325291.1864586-1069-265353954747591/AnsiballZ_file.py'
Oct 01 13:28:11 compute-0 sudo[247424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:11 compute-0 python3.9[247426]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:11 compute-0 ceph-mon[74802]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:11 compute-0 sudo[247424]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.294 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:28:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:28:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:28:12.297 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:28:12 compute-0 sudo[247576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjnhfbgagjpvvhjtthbuqrsfnlnwwraf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325291.9973626-1069-178719306963542/AnsiballZ_file.py'
Oct 01 13:28:12 compute-0 sudo[247576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:12 compute-0 python3.9[247578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:12 compute-0 sudo[247576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:12 compute-0 sudo[247728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplwxkhkrvcsaolmebolyvpjmuzpnkff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325292.7400134-1069-35308613301804/AnsiballZ_file.py'
Oct 01 13:28:12 compute-0 sudo[247728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:13 compute-0 ceph-mon[74802]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:13 compute-0 python3.9[247730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:13 compute-0 sudo[247728]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:13 compute-0 sudo[247880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-megkdtdlaxhsrygyaeusjigwycwmuevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325293.441333-1069-203219555157713/AnsiballZ_file.py'
Oct 01 13:28:13 compute-0 sudo[247880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:13 compute-0 python3.9[247882]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:14 compute-0 sudo[247880]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:14 compute-0 sudo[248032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqcfwkpyyoyvgbvtsomrdtojpzngjnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325294.138416-1069-236645517601241/AnsiballZ_file.py'
Oct 01 13:28:14 compute-0 sudo[248032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:14 compute-0 python3.9[248034]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:14 compute-0 sudo[248032]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:15 compute-0 sudo[248184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjceivquebvzcgqoxiijdbqlvghvgkng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325294.8518627-1126-248302708017853/AnsiballZ_file.py'
Oct 01 13:28:15 compute-0 sudo[248184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:15 compute-0 ceph-mon[74802]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:15 compute-0 python3.9[248186]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:15 compute-0 sudo[248184]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:16 compute-0 sudo[248366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwtueimhvxvnqxbjpcgcbkurodyybzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325295.803678-1126-68641359668079/AnsiballZ_file.py'
Oct 01 13:28:16 compute-0 sudo[248366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:16 compute-0 podman[248311]: 2025-10-01 13:28:16.20811734 +0000 UTC m=+0.097353574 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:28:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:16 compute-0 podman[248310]: 2025-10-01 13:28:16.237864679 +0000 UTC m=+0.135600151 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 13:28:16 compute-0 python3.9[248374]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:16 compute-0 sudo[248366]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:16 compute-0 sudo[248535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlvgskmrecqfoyvdhraqntyviqldxbvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325296.581209-1126-73481537035857/AnsiballZ_file.py'
Oct 01 13:28:16 compute-0 sudo[248535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:17 compute-0 python3.9[248537]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:17 compute-0 sudo[248535]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:17 compute-0 sudo[248687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twemuevqsyehjxpwsgojgcyqfffqsffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325297.3593063-1126-215737349716183/AnsiballZ_file.py'
Oct 01 13:28:17 compute-0 sudo[248687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:17 compute-0 python3.9[248689]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:17 compute-0 sudo[248687]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:18 compute-0 ceph-mon[74802]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:18 compute-0 sudo[248839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfyqvdwiccinetuwgbfbqhfshsypcib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325298.1449895-1126-204035351407878/AnsiballZ_file.py'
Oct 01 13:28:18 compute-0 sudo[248839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:18 compute-0 python3.9[248841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:18 compute-0 sudo[248839]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:19 compute-0 sudo[248991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldnirzbcdfektycfmhqxevllxpxbvwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325298.9327416-1126-251786043152354/AnsiballZ_file.py'
Oct 01 13:28:19 compute-0 sudo[248991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:19 compute-0 ceph-mon[74802]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:19 compute-0 python3.9[248993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:19 compute-0 sudo[248991]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:20 compute-0 sudo[249158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuqqyzoeraehkwptofeyhohrsjholoyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325299.685245-1126-24123358073884/AnsiballZ_file.py'
Oct 01 13:28:20 compute-0 sudo[249158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:20 compute-0 podman[249117]: 2025-10-01 13:28:20.027818956 +0000 UTC m=+0.074671797 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct 01 13:28:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:20 compute-0 python3.9[249164]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:20 compute-0 sudo[249158]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:20 compute-0 sudo[249315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohhsencruzklwzebpoiggnxlnlgrqcgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325300.4035523-1126-82612729959526/AnsiballZ_file.py'
Oct 01 13:28:20 compute-0 sudo[249315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:20 compute-0 python3.9[249317]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:20 compute-0 sudo[249315]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:21 compute-0 sudo[249467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etnlgwcilztygidlupcegfnovpazitsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325301.2353706-1184-3796291681496/AnsiballZ_command.py'
Oct 01 13:28:21 compute-0 sudo[249467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:21 compute-0 python3.9[249469]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:21 compute-0 ceph-mon[74802]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:21 compute-0 sudo[249467]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:22 compute-0 python3.9[249621]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 01 13:28:22 compute-0 ceph-mon[74802]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:23 compute-0 sudo[249771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmlqjibzlndrpbwjddumwmbmwlregxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325303.0750542-1202-56667488288577/AnsiballZ_systemd_service.py'
Oct 01 13:28:23 compute-0 sudo[249771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:23 compute-0 python3.9[249773]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:28:23 compute-0 systemd[1]: Reloading.
Oct 01 13:28:23 compute-0 systemd-sysv-generator[249802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:28:23 compute-0 systemd-rc-local-generator[249798]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:28:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:24 compute-0 sudo[249771]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:24 compute-0 sudo[249958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxtjpiwulfjbgwagtxhysbnvkxdlomu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325304.6725202-1210-165528606046140/AnsiballZ_command.py'
Oct 01 13:28:24 compute-0 sudo[249958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:25 compute-0 python3.9[249960]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:25 compute-0 sudo[249958]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:25 compute-0 ceph-mon[74802]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:25 compute-0 sudo[250111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjhhzvqiyxwwubgzyhsggftaefprmbqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325305.4782133-1210-219870034461995/AnsiballZ_command.py'
Oct 01 13:28:25 compute-0 sudo[250111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:26 compute-0 python3.9[250113]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:26 compute-0 sudo[250111]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:26 compute-0 sudo[250266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdxolmptbceljtwbyugcueyknyhsjvuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325306.3500288-1210-200194245384541/AnsiballZ_command.py'
Oct 01 13:28:26 compute-0 sudo[250266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:26 compute-0 python3.9[250268]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:26 compute-0 sudo[250266]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:27 compute-0 sudo[250419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhenvxmaybgbaqbddxjpbxiknxhgtgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325307.026734-1210-253042165579575/AnsiballZ_command.py'
Oct 01 13:28:27 compute-0 sudo[250419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:27 compute-0 python3.9[250421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:27 compute-0 sudo[250419]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:27 compute-0 ceph-mon[74802]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:28 compute-0 sudo[250589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbuzbuedhhdthfndqtywhurvxpfskdzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325307.696468-1210-167676038602382/AnsiballZ_command.py'
Oct 01 13:28:28 compute-0 sudo[250589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:28 compute-0 podman[250546]: 2025-10-01 13:28:28.070790213 +0000 UTC m=+0.094682359 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 13:28:28 compute-0 unix_chkpwd[250595]: password check failed for user (root)
Oct 01 13:28:28 compute-0 sshd-session[250114]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:28:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:28 compute-0 python3.9[250594]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:28 compute-0 sudo[250589]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:28 compute-0 sudo[250746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvceqrgrfeuydhrepzmfwigazgdizmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325308.4964802-1210-101331484674628/AnsiballZ_command.py'
Oct 01 13:28:28 compute-0 sudo[250746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:29 compute-0 python3.9[250748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:29 compute-0 sudo[250746]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:29 compute-0 sudo[250901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adpvtzrkcbyjyroxayhydwngthrfplmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325309.2362373-1210-209673761259527/AnsiballZ_command.py'
Oct 01 13:28:29 compute-0 sudo[250901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:29 compute-0 python3.9[250903]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:29 compute-0 sudo[250901]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:29 compute-0 ceph-mon[74802]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:30 compute-0 sshd-session[250877]: Invalid user git from 200.7.101.139 port 51698
Oct 01 13:28:30 compute-0 sshd-session[250877]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:28:30 compute-0 sshd-session[250877]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:28:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:30 compute-0 sshd-session[250114]: Failed password for root from 27.254.137.144 port 53230 ssh2
Oct 01 13:28:30 compute-0 sudo[251054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lluoqdyzgvjoqifofdyyxowhbgeewvnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325310.0308526-1210-178820726789515/AnsiballZ_command.py'
Oct 01 13:28:30 compute-0 sudo[251054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:30 compute-0 python3.9[251056]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 01 13:28:30 compute-0 sudo[251054]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:31 compute-0 ceph-mon[74802]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:31 compute-0 sudo[251209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtetatiismtijsdmkzuzlltigcjutvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325311.518764-1289-129334828951403/AnsiballZ_file.py'
Oct 01 13:28:31 compute-0 sudo[251209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:31 compute-0 unix_chkpwd[251212]: password check failed for user (root)
Oct 01 13:28:31 compute-0 sshd-session[251082]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46  user=root
Oct 01 13:28:32 compute-0 python3.9[251211]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:32 compute-0 sshd-session[250114]: Received disconnect from 27.254.137.144 port 53230:11: Bye Bye [preauth]
Oct 01 13:28:32 compute-0 sshd-session[250114]: Disconnected from authenticating user root 27.254.137.144 port 53230 [preauth]
Oct 01 13:28:32 compute-0 sudo[251209]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:32 compute-0 sshd-session[250877]: Failed password for invalid user git from 200.7.101.139 port 51698 ssh2
Oct 01 13:28:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:32 compute-0 sudo[251362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scbpjupyopqiuiqhowormmtltxyctueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325312.2381294-1289-105850603417234/AnsiballZ_file.py'
Oct 01 13:28:32 compute-0 sudo[251362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:32 compute-0 sshd-session[250877]: Received disconnect from 200.7.101.139 port 51698:11: Bye Bye [preauth]
Oct 01 13:28:32 compute-0 sshd-session[250877]: Disconnected from invalid user git 200.7.101.139 port 51698 [preauth]
Oct 01 13:28:32 compute-0 python3.9[251364]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:32 compute-0 sudo[251362]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:33 compute-0 sudo[251514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crifhfurifmwgleedzjmcbxavhvrebvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325312.9782155-1289-245266654530571/AnsiballZ_file.py'
Oct 01 13:28:33 compute-0 sudo[251514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:33 compute-0 python3.9[251516]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:33 compute-0 sudo[251514]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:33 compute-0 ceph-mon[74802]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:33 compute-0 sshd-session[251082]: Failed password for root from 156.236.31.46 port 45212 ssh2
Oct 01 13:28:33 compute-0 sshd-session[251082]: Received disconnect from 156.236.31.46 port 45212:11: Bye Bye [preauth]
Oct 01 13:28:33 compute-0 sshd-session[251082]: Disconnected from authenticating user root 156.236.31.46 port 45212 [preauth]
Oct 01 13:28:33 compute-0 sudo[251666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobmtoelgdokfmkcyrjicavgiigjjklb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325313.6730385-1311-8324409509776/AnsiballZ_file.py'
Oct 01 13:28:33 compute-0 sudo[251666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:34 compute-0 python3.9[251668]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:34 compute-0 sudo[251666]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:34 compute-0 sudo[251818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzpwxmycajtzzgkxcpkxxtdalefzrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325314.336433-1311-12330350284079/AnsiballZ_file.py'
Oct 01 13:28:34 compute-0 sudo[251818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:34 compute-0 python3.9[251820]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:34 compute-0 sudo[251818]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:35 compute-0 sudo[251970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvypuhvkzyntlpecjkudyqthmdiiwacy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325314.9782195-1311-144373783895390/AnsiballZ_file.py'
Oct 01 13:28:35 compute-0 sudo[251970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:35 compute-0 python3.9[251972]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:35 compute-0 sudo[251970]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:35 compute-0 ceph-mon[74802]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:35 compute-0 sudo[252122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoldlvrlvzutimkymzuhlnhxqowpxtfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325315.623104-1311-13457774625857/AnsiballZ_file.py'
Oct 01 13:28:35 compute-0 sudo[252122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:36 compute-0 python3.9[252124]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:36 compute-0 sudo[252122]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:36 compute-0 sudo[252274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtxyqqkmfnzenlyugkvkjdhlyvvtxlnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325316.382134-1311-253098572815994/AnsiballZ_file.py'
Oct 01 13:28:36 compute-0 sudo[252274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:36 compute-0 python3.9[252276]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:37 compute-0 sudo[252274]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:37 compute-0 sudo[252426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcbssxlbrmwxytakyubxzrrbmblpdopc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325317.1419108-1311-157896491198636/AnsiballZ_file.py'
Oct 01 13:28:37 compute-0 sudo[252426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:37 compute-0 python3.9[252428]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:37 compute-0 sudo[252426]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:37 compute-0 ceph-mon[74802]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:38 compute-0 sudo[252578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erexvrkbfbexeswqgqrqzrclpycqkewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325317.8776057-1311-200767847309112/AnsiballZ_file.py'
Oct 01 13:28:38 compute-0 sudo[252578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:38 compute-0 python3.9[252580]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:38 compute-0 sudo[252578]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:39 compute-0 ceph-mon[74802]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:39 compute-0 sudo[252730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxkihjxmbalfettrxvjpnlndkpnkcdwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325318.697934-1311-200232402175689/AnsiballZ_file.py'
Oct 01 13:28:39 compute-0 sudo[252730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:39 compute-0 python3.9[252732]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:39 compute-0 sudo[252730]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:39 compute-0 sudo[252882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmpjlcgpfdpsbdkzfhjzjkiymwfgosqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325319.4895432-1311-216588271659520/AnsiballZ_file.py'
Oct 01 13:28:39 compute-0 sudo[252882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:40 compute-0 python3.9[252884]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:40 compute-0 sudo[252882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:41 compute-0 ceph-mon[74802]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:43 compute-0 ceph-mon[74802]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:45 compute-0 ceph-mon[74802]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:45 compute-0 sudo[253034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawulwqepnqjuctjcqutctawgkpqoxyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325325.3789618-1514-242862721558231/AnsiballZ_getent.py'
Oct 01 13:28:45 compute-0 sudo[253034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:46 compute-0 python3.9[253036]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 01 13:28:46 compute-0 sudo[253034]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:46 compute-0 podman[253115]: 2025-10-01 13:28:46.503639862 +0000 UTC m=+0.062336549 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:28:46 compute-0 podman[253114]: 2025-10-01 13:28:46.535198858 +0000 UTC m=+0.090354403 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:28:46 compute-0 sudo[253232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricoezgnczzfxeueqzomrnkzgesnofgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325326.3002264-1522-160322770307492/AnsiballZ_group.py'
Oct 01 13:28:46 compute-0 sudo[253232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:46 compute-0 python3.9[253234]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 01 13:28:47 compute-0 groupadd[253235]: group added to /etc/group: name=nova, GID=42436
Oct 01 13:28:47 compute-0 groupadd[253235]: group added to /etc/gshadow: name=nova
Oct 01 13:28:47 compute-0 groupadd[253235]: new group: name=nova, GID=42436
Oct 01 13:28:47 compute-0 ceph-mon[74802]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:47 compute-0 sudo[253232]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:47 compute-0 sudo[253247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:47 compute-0 sudo[253247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:47 compute-0 sudo[253247]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:47 compute-0 sudo[253290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:28:47 compute-0 sudo[253290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:47 compute-0 sudo[253290]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:47 compute-0 sudo[253328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:47 compute-0 sudo[253328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:47 compute-0 sudo[253328]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:47 compute-0 sudo[253386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:28:47 compute-0 sudo[253386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:28:47
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:28:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:28:48 compute-0 sudo[253507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhducsbfubotnjesibnpwljszdfwjfdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325327.586877-1530-247629045957692/AnsiballZ_user.py'
Oct 01 13:28:48 compute-0 sudo[253507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:48 compute-0 sudo[253386]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f885fb8e-2889-4555-8a1b-3b5c6ef15cd0 does not exist
Oct 01 13:28:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 769e54c4-3fbd-4576-b0c6-8108455212ed does not exist
Oct 01 13:28:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e4e3553f-373a-4037-9450-28f808e9a779 does not exist
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:28:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:28:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:28:48 compute-0 sudo[253524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:48 compute-0 sudo[253524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:48 compute-0 sudo[253524]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:48 compute-0 python3.9[253509]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 01 13:28:48 compute-0 sudo[253550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:28:48 compute-0 sudo[253550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:48 compute-0 sudo[253550]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:48 compute-0 sudo[253576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:48 compute-0 sudo[253576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:48 compute-0 sudo[253576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:48 compute-0 sudo[253601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:28:48 compute-0 sudo[253601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:48 compute-0 useradd[253552]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 01 13:28:48 compute-0 useradd[253552]: add 'nova' to group 'libvirt'
Oct 01 13:28:48 compute-0 useradd[253552]: add 'nova' to shadow group 'libvirt'
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.042300354 +0000 UTC m=+0.038036551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.269524074 +0000 UTC m=+0.265260261 container create 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:28:49 compute-0 systemd[1]: Started libpod-conmon-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope.
Oct 01 13:28:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:49 compute-0 sudo[253507]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:49 compute-0 ceph-mon[74802]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:28:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:28:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.504365895 +0000 UTC m=+0.500102092 container init 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.516212689 +0000 UTC m=+0.511948856 container start 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:28:49 compute-0 nifty_zhukovsky[253688]: 167 167
Oct 01 13:28:49 compute-0 systemd[1]: libpod-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope: Deactivated successfully.
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.527742923 +0000 UTC m=+0.523479120 container attach 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.52859693 +0000 UTC m=+0.524333137 container died 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d73d573269a7ac64ec863e0c77b4908e0e11bef786f0dc0e01ebc2e6a8ad706-merged.mount: Deactivated successfully.
Oct 01 13:28:49 compute-0 podman[253666]: 2025-10-01 13:28:49.906887837 +0000 UTC m=+0.902624054 container remove 185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:28:49 compute-0 systemd[1]: libpod-conmon-185bf4459f7396c18eab3d12e691d4bb21c4703d5b85134a3e698c8e7263cbdf.scope: Deactivated successfully.
Oct 01 13:28:50 compute-0 podman[253738]: 2025-10-01 13:28:50.182858506 +0000 UTC m=+0.110142427 container create 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 01 13:28:50 compute-0 podman[253738]: 2025-10-01 13:28:50.112038681 +0000 UTC m=+0.039322582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:50 compute-0 systemd[1]: Started libpod-conmon-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope.
Oct 01 13:28:50 compute-0 podman[253753]: 2025-10-01 13:28:50.295046417 +0000 UTC m=+0.065453517 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 13:28:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:50 compute-0 sshd-session[253756]: Accepted publickey for zuul from 192.168.122.30 port 38726 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 13:28:50 compute-0 systemd-logind[818]: New session 52 of user zuul.
Oct 01 13:28:50 compute-0 systemd[1]: Started Session 52 of User zuul.
Oct 01 13:28:50 compute-0 sshd-session[253756]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 13:28:50 compute-0 podman[253738]: 2025-10-01 13:28:50.37724152 +0000 UTC m=+0.304525501 container init 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:28:50 compute-0 podman[253738]: 2025-10-01 13:28:50.393166623 +0000 UTC m=+0.320450514 container start 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:28:50 compute-0 podman[253738]: 2025-10-01 13:28:50.416081707 +0000 UTC m=+0.343365688 container attach 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:28:50 compute-0 sshd-session[253781]: Received disconnect from 192.168.122.30 port 38726:11: disconnected by user
Oct 01 13:28:50 compute-0 sshd-session[253781]: Disconnected from user zuul 192.168.122.30 port 38726
Oct 01 13:28:50 compute-0 sshd-session[253756]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:28:50 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Oct 01 13:28:50 compute-0 systemd-logind[818]: Session 52 logged out. Waiting for processes to exit.
Oct 01 13:28:50 compute-0 systemd-logind[818]: Removed session 52.
Oct 01 13:28:51 compute-0 python3.9[253934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:28:51 compute-0 ceph-mon[74802]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:51 compute-0 ecstatic_lichterman[253772]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:28:51 compute-0 ecstatic_lichterman[253772]: --> relative data size: 1.0
Oct 01 13:28:51 compute-0 ecstatic_lichterman[253772]: --> All data devices are unavailable
Oct 01 13:28:51 compute-0 systemd[1]: libpod-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Deactivated successfully.
Oct 01 13:28:51 compute-0 systemd[1]: libpod-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Consumed 1.094s CPU time.
Oct 01 13:28:51 compute-0 podman[253738]: 2025-10-01 13:28:51.555275596 +0000 UTC m=+1.482559477 container died 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a22eb5c9299826b2c7e189c276a56ac18c843663b67956b7825d1cfe46213a-merged.mount: Deactivated successfully.
Oct 01 13:28:51 compute-0 podman[253738]: 2025-10-01 13:28:51.711613169 +0000 UTC m=+1.638897080 container remove 93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:28:51 compute-0 systemd[1]: libpod-conmon-93c44d1347374ea1426de3743ee199475edf12ca3fc8c9dce09a8cef2ae7805c.scope: Deactivated successfully.
Oct 01 13:28:51 compute-0 sudo[253601]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:51 compute-0 sudo[254091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:51 compute-0 sudo[254091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:51 compute-0 sudo[254091]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:51 compute-0 python3.9[254090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325330.701829-1555-93289603244759/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:51 compute-0 sudo[254116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:28:51 compute-0 sudo[254116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:51 compute-0 sudo[254116]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:52 compute-0 sudo[254141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:52 compute-0 sudo[254141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:52 compute-0 sudo[254141]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:52 compute-0 sudo[254166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:28:52 compute-0 sudo[254166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.549105898 +0000 UTC m=+0.073988697 container create a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:28:52 compute-0 systemd[1]: Started libpod-conmon-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope.
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.518040987 +0000 UTC m=+0.042923846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.671499289 +0000 UTC m=+0.196382208 container init a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.68767806 +0000 UTC m=+0.212560839 container start a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct 01 13:28:52 compute-0 quirky_volhard[254354]: 167 167
Oct 01 13:28:52 compute-0 systemd[1]: libpod-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope: Deactivated successfully.
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.69656203 +0000 UTC m=+0.221444839 container attach a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:28:52 compute-0 conmon[254354]: conmon a8823a491bb992fd2e9f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope/container/memory.events
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.697759128 +0000 UTC m=+0.222641937 container died a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c390bbfa38880c4b116c87110128ab80569ebbc619767fdc10ffdcb0523c246a-merged.mount: Deactivated successfully.
Oct 01 13:28:52 compute-0 podman[254307]: 2025-10-01 13:28:52.804435163 +0000 UTC m=+0.329317932 container remove a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:28:52 compute-0 systemd[1]: libpod-conmon-a8823a491bb992fd2e9fa68f0badcf30dadae6519e53552df98191578235b3c2.scope: Deactivated successfully.
Oct 01 13:28:52 compute-0 python3.9[254409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:53.003582808 +0000 UTC m=+0.057748333 container create 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:28:53 compute-0 systemd[1]: Started libpod-conmon-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope.
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:52.981182991 +0000 UTC m=+0.035348496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:53.142177172 +0000 UTC m=+0.196342707 container init 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:28:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:53.152081544 +0000 UTC m=+0.206247039 container start 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:53.180032396 +0000 UTC m=+0.234197931 container attach 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:28:53 compute-0 ceph-mon[74802]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:53 compute-0 python3.9[254519]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]: {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     "0": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "devices": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "/dev/loop3"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             ],
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_name": "ceph_lv0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_size": "21470642176",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "name": "ceph_lv0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "tags": {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_name": "ceph",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.crush_device_class": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.encrypted": "0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_id": "0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.vdo": "0"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             },
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "vg_name": "ceph_vg0"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         }
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     ],
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     "1": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "devices": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "/dev/loop4"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             ],
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_name": "ceph_lv1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_size": "21470642176",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "name": "ceph_lv1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "tags": {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_name": "ceph",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.crush_device_class": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.encrypted": "0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_id": "1",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.vdo": "0"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             },
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "vg_name": "ceph_vg1"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         }
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     ],
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     "2": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "devices": [
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "/dev/loop5"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             ],
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_name": "ceph_lv2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_size": "21470642176",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "name": "ceph_lv2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "tags": {
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.cluster_name": "ceph",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.crush_device_class": "",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.encrypted": "0",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osd_id": "2",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:                 "ceph.vdo": "0"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             },
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "type": "block",
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:             "vg_name": "ceph_vg2"
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:         }
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]:     ]
Oct 01 13:28:53 compute-0 upbeat_jennings[254464]: }
Oct 01 13:28:53 compute-0 systemd[1]: libpod-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope: Deactivated successfully.
Oct 01 13:28:53 compute-0 podman[254423]: 2025-10-01 13:28:53.959743581 +0000 UTC m=+1.013909066 container died 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-affdf466946453ae1a69ff71d0505fd6470dc6577aa37fb9e02ce8d11dccf571-merged.mount: Deactivated successfully.
Oct 01 13:28:54 compute-0 python3.9[254674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:28:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:54 compute-0 podman[254423]: 2025-10-01 13:28:54.347003931 +0000 UTC m=+1.401169456 container remove 8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:28:54 compute-0 systemd[1]: libpod-conmon-8592ca819f949dc14655a831e1bce5a2e61f4d948c83377370013dd47725e406.scope: Deactivated successfully.
Oct 01 13:28:54 compute-0 sudo[254166]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:54 compute-0 sudo[254732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:54 compute-0 sudo[254732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:54 compute-0 sudo[254732]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:54 compute-0 sudo[254781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:28:54 compute-0 sudo[254781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:54 compute-0 sudo[254781]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:54 compute-0 sudo[254831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:54 compute-0 sudo[254831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:54 compute-0 sudo[254831]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:54 compute-0 sudo[254882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:28:54 compute-0 sudo[254882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:54 compute-0 python3.9[254879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325333.675824-1555-20186330931562/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.161275958 +0000 UTC m=+0.030130253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.302527854 +0000 UTC m=+0.171382059 container create 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:28:55 compute-0 systemd[1]: Started libpod-conmon-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope.
Oct 01 13:28:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.49630613 +0000 UTC m=+0.365160355 container init 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.509911549 +0000 UTC m=+0.378765754 container start 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:28:55 compute-0 crazy_northcutt[255097]: 167 167
Oct 01 13:28:55 compute-0 systemd[1]: libpod-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope: Deactivated successfully.
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.531295303 +0000 UTC m=+0.400149618 container attach 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.53244048 +0000 UTC m=+0.401294695 container died 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:28:55 compute-0 python3.9[255118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:28:55 compute-0 ceph-mon[74802]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-61fb9e6c9a7e3f5a3c9debf258ed0bfe24414653c95730c2e5be1abbad2daafe-merged.mount: Deactivated successfully.
Oct 01 13:28:55 compute-0 podman[255023]: 2025-10-01 13:28:55.972654552 +0000 UTC m=+0.841508797 container remove 5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:28:56 compute-0 systemd[1]: libpod-conmon-5896bccd0cb6436f55a398255064e9004ee2f735dc395d49ad4eb1351601861c.scope: Deactivated successfully.
Oct 01 13:28:56 compute-0 unix_chkpwd[255250]: password check failed for user (root)
Oct 01 13:28:56 compute-0 sshd-session[255094]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232  user=root
Oct 01 13:28:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:56 compute-0 podman[255263]: 2025-10-01 13:28:56.27968459 +0000 UTC m=+0.115148535 container create 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:28:56 compute-0 podman[255263]: 2025-10-01 13:28:56.199672256 +0000 UTC m=+0.035136301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:28:56 compute-0 python3.9[255257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325335.0316463-1555-270376451294197/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:56 compute-0 systemd[1]: Started libpod-conmon-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope.
Oct 01 13:28:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:28:56 compute-0 podman[255263]: 2025-10-01 13:28:56.474968643 +0000 UTC m=+0.310432628 container init 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:28:56 compute-0 podman[255263]: 2025-10-01 13:28:56.490160142 +0000 UTC m=+0.325624077 container start 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:28:56 compute-0 podman[255263]: 2025-10-01 13:28:56.531629811 +0000 UTC m=+0.367093846 container attach 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:28:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:28:57 compute-0 python3.9[255433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]: {
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_id": 0,
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "type": "bluestore"
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     },
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_id": 2,
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "type": "bluestore"
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     },
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_id": 1,
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:         "type": "bluestore"
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]:     }
Oct 01 13:28:57 compute-0 distracted_stonebraker[255279]: }
Oct 01 13:28:57 compute-0 systemd[1]: libpod-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Deactivated successfully.
Oct 01 13:28:57 compute-0 systemd[1]: libpod-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Consumed 1.126s CPU time.
Oct 01 13:28:57 compute-0 podman[255263]: 2025-10-01 13:28:57.617169517 +0000 UTC m=+1.452633452 container died 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-572f7192a735e084ae1ac35dda7c8f3b520ba93df22f296745341bf5cce1f4b5-merged.mount: Deactivated successfully.
Oct 01 13:28:57 compute-0 sshd-session[255094]: Failed password for root from 80.253.31.232 port 51932 ssh2
Oct 01 13:28:57 compute-0 python3.9[255582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325336.5853362-1555-25632415394607/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:28:57 compute-0 ceph-mon[74802]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:57 compute-0 podman[255263]: 2025-10-01 13:28:57.982116802 +0000 UTC m=+1.817580777 container remove 5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:28:57 compute-0 systemd[1]: libpod-conmon-5c0b6a6f1825ba46caa6495d2f0ce9ddcdbf460ae4c58fd3ce5e77f175092dad.scope: Deactivated successfully.
Oct 01 13:28:58 compute-0 sshd-session[255094]: Received disconnect from 80.253.31.232 port 51932:11: Bye Bye [preauth]
Oct 01 13:28:58 compute-0 sshd-session[255094]: Disconnected from authenticating user root 80.253.31.232 port 51932 [preauth]
Oct 01 13:28:58 compute-0 sudo[254882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:28:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:28:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:28:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c1461ee4-db9d-4426-8327-671a76606b6e does not exist
Oct 01 13:28:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a05b3e4-15a1-4f35-8652-22d9598d1eb5 does not exist
Oct 01 13:28:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:58 compute-0 sudo[255673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:28:58 compute-0 sudo[255673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:58 compute-0 sudo[255673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:58 compute-0 sudo[255728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:28:58 compute-0 sudo[255728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:28:58 compute-0 sudo[255728]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:58 compute-0 podman[255720]: 2025-10-01 13:28:58.351265231 +0000 UTC m=+0.069874143 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:28:58 compute-0 sudo[255813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdjnvizprfpoxgsusmnmwtlqdunqqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325338.0845377-1624-261422183998912/AnsiballZ_file.py'
Oct 01 13:28:58 compute-0 sudo[255813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:58 compute-0 python3.9[255815]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:58 compute-0 sudo[255813]: pam_unix(sudo:session): session closed for user root
Oct 01 13:28:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:28:59 compute-0 ceph-mon[74802]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:28:59 compute-0 sudo[255965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdxenifxowfdzydhjylcinpfuewdmdmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325338.8120916-1632-193284259103219/AnsiballZ_copy.py'
Oct 01 13:28:59 compute-0 sudo[255965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:28:59 compute-0 python3.9[255967]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:28:59 compute-0 sudo[255965]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:00 compute-0 sudo[256117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwsfnhdvxxnxrfsljriphhwiryfljtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325339.5898833-1640-191277656230603/AnsiballZ_stat.py'
Oct 01 13:29:00 compute-0 sudo[256117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:00 compute-0 python3.9[256119]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:00 compute-0 sudo[256117]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:00 compute-0 sudo[256269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzxxoeeohfsulhwzgloypifwqohamxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325340.5409873-1648-201163619771849/AnsiballZ_stat.py'
Oct 01 13:29:00 compute-0 sudo[256269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:01 compute-0 python3.9[256271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:29:01 compute-0 sudo[256269]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:01 compute-0 ceph-mon[74802]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:01 compute-0 sudo[256392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywzhaquclwfgnstfiutullsfyatjzqpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325340.5409873-1648-201163619771849/AnsiballZ_copy.py'
Oct 01 13:29:01 compute-0 sudo[256392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:01 compute-0 python3.9[256394]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759325340.5409873-1648-201163619771849/.source _original_basename=.ko34wqwa follow=False checksum=f38746d134c75429bccd8dc462ab009d24eaf0f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 01 13:29:01 compute-0 sudo[256392]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:02 compute-0 python3.9[256546]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:03 compute-0 python3.9[256698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:29:03 compute-0 ceph-mon[74802]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:04 compute-0 python3.9[256819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325342.8639975-1674-272658090814078/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=d51188376d1ee8ea80c2336e6c661b92261c7db6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:29:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:04 compute-0 python3.9[256969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 01 13:29:05 compute-0 python3.9[257090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759325344.2349575-1689-211370419721784/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=b10d7cb8eb77f002035ee20deefa0512667b71ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 01 13:29:05 compute-0 ceph-mon[74802]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:06 compute-0 sudo[257240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccxmubhtkgvgemkghdnjnhbnawngnexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325345.7339277-1706-224212614375752/AnsiballZ_container_config_data.py'
Oct 01 13:29:06 compute-0 sudo[257240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:06 compute-0 python3.9[257242]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 01 13:29:06 compute-0 sudo[257240]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:06 compute-0 sudo[257392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhceravjjzzbvzjmarjxynldwwlzbwvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325346.5639205-1715-48937787390354/AnsiballZ_container_config_hash.py'
Oct 01 13:29:06 compute-0 sudo[257392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:07 compute-0 python3.9[257394]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:29:07 compute-0 sudo[257392]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:07 compute-0 sudo[257544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixwkptcseoheqvconsgbospnxogjdsov ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759325347.4441366-1725-57011279739856/AnsiballZ_edpm_container_manage.py'
Oct 01 13:29:07 compute-0 sudo[257544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:08 compute-0 ceph-mon[74802]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:08 compute-0 python3[257546]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:29:09 compute-0 ceph-mon[74802]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:12 compute-0 ceph-mon[74802]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.295 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:29:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:29:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:29:12.296 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:29:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:20 compute-0 ceph-mon[74802]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:22 compute-0 podman[257606]: 2025-10-01 13:29:22.087630269 +0000 UTC m=+4.627943716 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:29:22 compute-0 podman[257605]: 2025-10-01 13:29:22.13766807 +0000 UTC m=+4.680143637 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:29:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:22 compute-0 podman[257625]: 2025-10-01 13:29:22.268943544 +0000 UTC m=+1.821464964 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:29:23 compute-0 ceph-mon[74802]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:23 compute-0 ceph-mon[74802]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:23 compute-0 ceph-mon[74802]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:23 compute-0 ceph-mon[74802]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:24 compute-0 ceph-mon[74802]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:25 compute-0 ceph-mon[74802]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:28 compute-0 ceph-mon[74802]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:28 compute-0 podman[257559]: 2025-10-01 13:29:28.515059586 +0000 UTC m=+20.016723120 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct 01 13:29:28 compute-0 podman[257697]: 2025-10-01 13:29:28.526338615 +0000 UTC m=+0.102660247 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 01 13:29:28 compute-0 podman[257739]: 2025-10-01 13:29:28.64412166 +0000 UTC m=+0.021756514 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct 01 13:29:28 compute-0 podman[257739]: 2025-10-01 13:29:28.822500602 +0000 UTC m=+0.200135486 container create ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 01 13:29:28 compute-0 python3[257546]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 01 13:29:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:29 compute-0 sudo[257544]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:29 compute-0 ceph-mon[74802]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:29 compute-0 sudo[257927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvgedhztowymadpneuxensnbqnvtdogn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325369.213862-1733-2716256092553/AnsiballZ_stat.py'
Oct 01 13:29:29 compute-0 sudo[257927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:29 compute-0 python3.9[257929]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:29 compute-0 sudo[257927]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:30 compute-0 sudo[258081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzzihibuydbhwzcmkuuelzhnxilkmffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325370.2121587-1745-48972958620872/AnsiballZ_container_config_data.py'
Oct 01 13:29:30 compute-0 sudo[258081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:30 compute-0 python3.9[258083]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 01 13:29:30 compute-0 sudo[258081]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:31 compute-0 sudo[258233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgpcnzlzfluovmlvuvcaemrekhzawfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325371.0541432-1754-106618605353206/AnsiballZ_container_config_hash.py'
Oct 01 13:29:31 compute-0 sudo[258233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:31 compute-0 ceph-mon[74802]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:31 compute-0 python3.9[258235]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 01 13:29:31 compute-0 sudo[258233]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:32 compute-0 sudo[258385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywihoxcjpsefctrkigabytqezbqnoiih ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759325372.0483541-1764-94352743750671/AnsiballZ_edpm_container_manage.py'
Oct 01 13:29:32 compute-0 sudo[258385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:32 compute-0 python3[258387]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 01 13:29:32 compute-0 podman[258424]: 2025-10-01 13:29:32.942865306 +0000 UTC m=+0.081768851 container create 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:29:32 compute-0 podman[258424]: 2025-10-01 13:29:32.902948287 +0000 UTC m=+0.041851902 image pull 613e2b735827096139e990f475c5ac5de0e55d8048941a4521c0c17a4351e975 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c
Oct 01 13:29:32 compute-0 python3[258387]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c kolla_start
Oct 01 13:29:33 compute-0 sudo[258385]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:33 compute-0 ceph-mon[74802]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:33 compute-0 sudo[258611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxmbydrhonpwdovmhjpcqpuacanraajl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325373.3424742-1772-96158414896827/AnsiballZ_stat.py'
Oct 01 13:29:33 compute-0 sudo[258611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:33 compute-0 python3.9[258613]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:33 compute-0 sudo[258611]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:34 compute-0 sudo[258765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunxfmigsfimhufpzylkvjzayoxpoomi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325374.2586577-1781-59022886869556/AnsiballZ_file.py'
Oct 01 13:29:34 compute-0 sudo[258765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:35 compute-0 python3.9[258767]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:29:35 compute-0 sudo[258765]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:35 compute-0 sudo[258916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enlkgbpizhooceoyhjgazuvjmnmbezwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325375.1115212-1781-69394282059465/AnsiballZ_copy.py'
Oct 01 13:29:35 compute-0 sudo[258916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:36 compute-0 python3.9[258918]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759325375.1115212-1781-69394282059465/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 01 13:29:36 compute-0 sudo[258916]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:36 compute-0 ceph-mon[74802]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:36 compute-0 sudo[258992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoyekrbagkjhgzunrealjcsaxhobppuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325375.1115212-1781-69394282059465/AnsiballZ_systemd.py'
Oct 01 13:29:36 compute-0 sudo[258992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:36 compute-0 python3.9[258994]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 01 13:29:36 compute-0 systemd[1]: Reloading.
Oct 01 13:29:36 compute-0 systemd-sysv-generator[259024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:29:36 compute-0 systemd-rc-local-generator[259018]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:29:37 compute-0 sudo[258992]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:37 compute-0 ceph-mon[74802]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:37 compute-0 sudo[259103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkhrkijwxndwwevdbfalazjnqxrdgdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325375.1115212-1781-69394282059465/AnsiballZ_systemd.py'
Oct 01 13:29:37 compute-0 sudo[259103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:38 compute-0 python3.9[259105]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 01 13:29:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:38 compute-0 systemd[1]: Reloading.
Oct 01 13:29:38 compute-0 systemd-rc-local-generator[259137]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 13:29:38 compute-0 systemd-sysv-generator[259140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 13:29:38 compute-0 systemd[1]: Starting nova_compute container...
Oct 01 13:29:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:39 compute-0 podman[259147]: 2025-10-01 13:29:39.687340665 +0000 UTC m=+0.926128991 container init 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 01 13:29:39 compute-0 podman[259147]: 2025-10-01 13:29:39.699353767 +0000 UTC m=+0.938142063 container start 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, container_name=nova_compute)
Oct 01 13:29:39 compute-0 nova_compute[259163]: + sudo -E kolla_set_configs
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Validating config file
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying service configuration files
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Deleting /etc/ceph
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Creating directory /etc/ceph
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Writing out command to execute
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:39 compute-0 nova_compute[259163]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 13:29:39 compute-0 nova_compute[259163]: ++ cat /run_command
Oct 01 13:29:39 compute-0 nova_compute[259163]: + CMD=nova-compute
Oct 01 13:29:39 compute-0 nova_compute[259163]: + ARGS=
Oct 01 13:29:39 compute-0 nova_compute[259163]: + sudo kolla_copy_cacerts
Oct 01 13:29:39 compute-0 nova_compute[259163]: + [[ ! -n '' ]]
Oct 01 13:29:39 compute-0 nova_compute[259163]: + . kolla_extend_start
Oct 01 13:29:39 compute-0 nova_compute[259163]: Running command: 'nova-compute'
Oct 01 13:29:39 compute-0 nova_compute[259163]: + echo 'Running command: '\''nova-compute'\'''
Oct 01 13:29:39 compute-0 nova_compute[259163]: + umask 0022
Oct 01 13:29:39 compute-0 nova_compute[259163]: + exec nova-compute
Oct 01 13:29:39 compute-0 sshd-session[259166]: Invalid user sujan from 156.236.31.46 port 45304
Oct 01 13:29:39 compute-0 sshd-session[259166]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:29:39 compute-0 sshd-session[259166]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:29:40 compute-0 podman[259147]: nova_compute
Oct 01 13:29:40 compute-0 systemd[1]: Started nova_compute container.
Oct 01 13:29:40 compute-0 sudo[259103]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:40 compute-0 ceph-mon[74802]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:41 compute-0 python3.9[259328]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:41 compute-0 ceph-mon[74802]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:41 compute-0 python3.9[259478]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:41 compute-0 sshd-session[259166]: Failed password for invalid user sujan from 156.236.31.46 port 45304 ssh2
Oct 01 13:29:42 compute-0 sshd-session[259300]: Invalid user thiago from 27.254.137.144 port 48896
Oct 01 13:29:42 compute-0 sshd-session[259300]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:29:42 compute-0 sshd-session[259300]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:29:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:42 compute-0 python3.9[259628]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 01 13:29:43 compute-0 sudo[259781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpbgggwmwnsargecpwyzyzkcpiifwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325383.0075574-1841-52567084276631/AnsiballZ_podman_container.py'
Oct 01 13:29:43 compute-0 sudo[259781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:43 compute-0 python3.9[259783]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 13:29:43 compute-0 sshd-session[259757]: Invalid user rmsadm from 200.7.101.139 port 36010
Oct 01 13:29:43 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:29:43 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:29:43 compute-0 sshd-session[259757]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:29:43 compute-0 sshd-session[259757]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:29:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:43 compute-0 sudo[259781]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:43 compute-0 sshd-session[259166]: Received disconnect from 156.236.31.46 port 45304:11: Bye Bye [preauth]
Oct 01 13:29:43 compute-0 sshd-session[259166]: Disconnected from invalid user sujan 156.236.31.46 port 45304 [preauth]
Oct 01 13:29:43 compute-0 ceph-mon[74802]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:44 compute-0 sshd-session[259300]: Failed password for invalid user thiago from 27.254.137.144 port 48896 ssh2
Oct 01 13:29:44 compute-0 sudo[259954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsqmlzttoxsecriboofakhnrazfwxqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325384.1854894-1849-202142056037518/AnsiballZ_systemd.py'
Oct 01 13:29:44 compute-0 sudo[259954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:45 compute-0 python3.9[259956]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.209 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.211 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.211 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 01 13:29:45 compute-0 sshd-session[259757]: Failed password for invalid user rmsadm from 200.7.101.139 port 36010 ssh2
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.420 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:29:45 compute-0 nova_compute[259163]: 2025-10-01 13:29:45.456 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:29:45 compute-0 sshd-session[259300]: Received disconnect from 27.254.137.144 port 48896:11: Bye Bye [preauth]
Oct 01 13:29:45 compute-0 sshd-session[259300]: Disconnected from invalid user thiago 27.254.137.144 port 48896 [preauth]
Oct 01 13:29:45 compute-0 sshd-session[259757]: Received disconnect from 200.7.101.139 port 36010:11: Bye Bye [preauth]
Oct 01 13:29:45 compute-0 sshd-session[259757]: Disconnected from invalid user rmsadm 200.7.101.139 port 36010 [preauth]
Oct 01 13:29:45 compute-0 ceph-mon[74802]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:46 compute-0 systemd[1]: Stopping nova_compute container...
Oct 01 13:29:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:46 compute-0 nova_compute[259163]: 2025-10-01 13:29:46.389 2 INFO nova.virt.driver [None req-16678575-ea8d-4c09-831b-1eb079adc354 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 01 13:29:46 compute-0 systemd[1]: libpod-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788.scope: Deactivated successfully.
Oct 01 13:29:46 compute-0 systemd[1]: libpod-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788.scope: Consumed 3.237s CPU time.
Oct 01 13:29:46 compute-0 podman[259964]: 2025-10-01 13:29:46.50118749 +0000 UTC m=+0.401774597 container died 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:29:47 compute-0 ceph-mon[74802]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788-userdata-shm.mount: Deactivated successfully.
Oct 01 13:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7-merged.mount: Deactivated successfully.
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:29:47
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'vms']
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:29:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:29:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:48 compute-0 sshd-session[259106]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:29:48 compute-0 sshd-session[259106]: banner exchange: Connection from 14.103.127.7 port 44608: Connection timed out
Oct 01 13:29:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:50 compute-0 ceph-mon[74802]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:50 compute-0 podman[259964]: 2025-10-01 13:29:50.349152162 +0000 UTC m=+4.249739269 container cleanup 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct 01 13:29:50 compute-0 podman[259964]: nova_compute
Oct 01 13:29:50 compute-0 podman[259994]: nova_compute
Oct 01 13:29:50 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 01 13:29:50 compute-0 systemd[1]: Stopped nova_compute container.
Oct 01 13:29:50 compute-0 systemd[1]: Starting nova_compute container...
Oct 01 13:29:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36b4df76f67f8995df9a9ad58528ce93ebfe6c9a621c75e85775ef898f8aedc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:50 compute-0 podman[260007]: 2025-10-01 13:29:50.567881756 +0000 UTC m=+0.110268426 container init 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute)
Oct 01 13:29:50 compute-0 podman[260007]: 2025-10-01 13:29:50.578478484 +0000 UTC m=+0.120865104 container start 39f99f4d0c4445939eb63cc846504c336d153a902938b54ccbf0294d161b2788 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute)
Oct 01 13:29:50 compute-0 podman[260007]: nova_compute
Oct 01 13:29:50 compute-0 nova_compute[260022]: + sudo -E kolla_set_configs
Oct 01 13:29:50 compute-0 systemd[1]: Started nova_compute container.
Oct 01 13:29:50 compute-0 sudo[259954]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Validating config file
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying service configuration files
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /etc/ceph
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Creating directory /etc/ceph
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Writing out command to execute
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:50 compute-0 nova_compute[260022]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 01 13:29:50 compute-0 nova_compute[260022]: ++ cat /run_command
Oct 01 13:29:50 compute-0 nova_compute[260022]: + CMD=nova-compute
Oct 01 13:29:50 compute-0 nova_compute[260022]: + ARGS=
Oct 01 13:29:50 compute-0 nova_compute[260022]: + sudo kolla_copy_cacerts
Oct 01 13:29:50 compute-0 nova_compute[260022]: + [[ ! -n '' ]]
Oct 01 13:29:50 compute-0 nova_compute[260022]: + . kolla_extend_start
Oct 01 13:29:50 compute-0 nova_compute[260022]: + echo 'Running command: '\''nova-compute'\'''
Oct 01 13:29:50 compute-0 nova_compute[260022]: Running command: 'nova-compute'
Oct 01 13:29:50 compute-0 nova_compute[260022]: + umask 0022
Oct 01 13:29:50 compute-0 nova_compute[260022]: + exec nova-compute
Oct 01 13:29:51 compute-0 sudo[260183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyytycqbytyytqkqautgbnakgytitxps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759325390.8614898-1858-76406999958265/AnsiballZ_podman_container.py'
Oct 01 13:29:51 compute-0 sudo[260183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 13:29:51 compute-0 ceph-mon[74802]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:51 compute-0 python3.9[260185]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 01 13:29:51 compute-0 systemd[1]: Started libpod-conmon-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope.
Oct 01 13:29:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 01 13:29:51 compute-0 podman[260210]: 2025-10-01 13:29:51.788675807 +0000 UTC m=+0.130117148 container init ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:29:51 compute-0 podman[260210]: 2025-10-01 13:29:51.801620839 +0000 UTC m=+0.143062150 container start ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init)
Oct 01 13:29:51 compute-0 python3.9[260185]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Applying nova statedir ownership
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 01 13:29:51 compute-0 nova_compute_init[260232]: INFO:nova_statedir:Nova statedir ownership complete
Oct 01 13:29:51 compute-0 systemd[1]: libpod-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope: Deactivated successfully.
Oct 01 13:29:51 compute-0 podman[260233]: 2025-10-01 13:29:51.888280545 +0000 UTC m=+0.047216942 container died ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible)
Oct 01 13:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01-userdata-shm.mount: Deactivated successfully.
Oct 01 13:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-277db7eddae632687a9c52183566b5484c629e1694122d0786ad06480d633b2d-merged.mount: Deactivated successfully.
Oct 01 13:29:51 compute-0 podman[260244]: 2025-10-01 13:29:51.944055548 +0000 UTC m=+0.063341914 container cleanup ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c, name=nova_compute_init, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:7055e8d7b7d72ce697c6077be14c525c019d186002f04765b90a14c82e01cc7c', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:29:51 compute-0 systemd[1]: libpod-conmon-ec6d17001d362aeee60bc4936a9335b1b7d34d04625ebe205bb2a61bd337eb01.scope: Deactivated successfully.
Oct 01 13:29:52 compute-0 sudo[260183]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:52 compute-0 sshd-session[222747]: Connection closed by 192.168.122.30 port 40368
Oct 01 13:29:52 compute-0 sshd-session[222744]: pam_unix(sshd:session): session closed for user zuul
Oct 01 13:29:52 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct 01 13:29:52 compute-0 systemd[1]: session-50.scope: Consumed 3min 18.255s CPU time.
Oct 01 13:29:52 compute-0 systemd-logind[818]: Session 50 logged out. Waiting for processes to exit.
Oct 01 13:29:52 compute-0 systemd-logind[818]: Removed session 50.
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.096 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.096 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.097 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.097 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.317 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.335 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:29:53 compute-0 ceph-mon[74802]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:53 compute-0 nova_compute[260022]: 2025-10-01 13:29:53.863 2 INFO nova.virt.driver [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.010 2 INFO nova.compute.provider_config [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.129 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.130 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.130 2 DEBUG oslo_concurrency.lockutils [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.131 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.132 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.133 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.134 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.135 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.136 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.137 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.138 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.139 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.140 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.141 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.142 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.143 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.144 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.145 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.146 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.147 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.148 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.149 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.150 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.151 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.152 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.153 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.154 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.155 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.156 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.157 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.158 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.159 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.160 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.161 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.162 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.163 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.164 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.165 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.166 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.167 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.168 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.169 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.170 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.171 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.172 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.173 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.174 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.175 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.176 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.177 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.178 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.179 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.180 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.181 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.182 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.183 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.184 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.185 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.186 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.187 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.188 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.189 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.190 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.191 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.192 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.193 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.194 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.195 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.196 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.197 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.198 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.199 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.200 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.201 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.202 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.203 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.204 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.205 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.206 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.207 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.208 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.209 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.210 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.211 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.212 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.213 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.214 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.215 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.216 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.217 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.218 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.219 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.219 2 WARNING oslo_config.cfg [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 01 13:29:54 compute-0 nova_compute[260022]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 01 13:29:54 compute-0 nova_compute[260022]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 01 13:29:54 compute-0 nova_compute[260022]: and ``live_migration_inbound_addr`` respectively.
Oct 01 13:29:54 compute-0 nova_compute[260022]: ).  Its value may be silently ignored in the future.
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.219 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.220 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.221 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.222 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_secret_uuid        = eb4b6ead-01d1-53b3-a52a-47dcc600555f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.223 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.224 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.225 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.226 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.227 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.228 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.229 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.230 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.231 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.232 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.233 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.234 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.235 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.236 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.237 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.238 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.239 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.240 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.241 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.242 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.243 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.244 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.245 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.246 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.247 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.248 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.249 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.250 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.251 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.252 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.253 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.254 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.255 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.256 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.257 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.258 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.259 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.260 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.261 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.262 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.263 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.264 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.265 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.266 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.267 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.268 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.269 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.270 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.271 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.272 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.273 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.274 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.275 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.276 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.277 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.278 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.279 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.280 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.281 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.282 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.283 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.284 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.285 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.286 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.287 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.288 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.289 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.290 2 DEBUG oslo_service.service [None req-ab1ba1ff-a4f0-40f4-8ad7-5d8193c50f17 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.291 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.331 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.332 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.332 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.333 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 01 13:29:54 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 01 13:29:54 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.443 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7a6a39a8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.446 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7a6a39a8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.447 2 INFO nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Connection event '1' reason 'None'
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.529 2 WARNING nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 01 13:29:54 compute-0 nova_compute[260022]: 2025-10-01 13:29:54.529 2 DEBUG nova.virt.libvirt.volume.mount [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 01 13:29:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.640 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host capabilities <capabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]: 
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <host>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <uuid>adf090e1-fe93-4ff6-a8f5-4224f2f21059</uuid>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <arch>x86_64</arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model>EPYC-Rome-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <vendor>AMD</vendor>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <microcode version='16777317'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <signature family='23' model='49' stepping='0'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='x2apic'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='tsc-deadline'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='osxsave'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='hypervisor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='tsc_adjust'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='spec-ctrl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='stibp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='arch-capabilities'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='cmp_legacy'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='topoext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='virt-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='lbrv'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='tsc-scale'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='vmcb-clean'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='pause-filter'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='pfthreshold'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='svme-addr-chk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='rdctl-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='skip-l1dfl-vmentry'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='mds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature name='pschange-mc-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <pages unit='KiB' size='4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <pages unit='KiB' size='2048'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <pages unit='KiB' size='1048576'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <power_management>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <suspend_mem/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </power_management>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <iommu support='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <migration_features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <live/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <uri_transports>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <uri_transport>tcp</uri_transport>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <uri_transport>rdma</uri_transport>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </uri_transports>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </migration_features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <topology>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <cells num='1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <cell id='0'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <memory unit='KiB'>7864104</memory>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <pages unit='KiB' size='2048'>0</pages>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <distances>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <sibling id='0' value='10'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           </distances>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           <cpus num='8'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:           </cpus>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         </cell>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </cells>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </topology>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <cache>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </cache>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <secmodel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model>selinux</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <doi>0</doi>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </secmodel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <secmodel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model>dac</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <doi>0</doi>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </secmodel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </host>
Oct 01 13:29:55 compute-0 nova_compute[260022]: 
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <guest>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <os_type>hvm</os_type>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <arch name='i686'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <wordsize>32</wordsize>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <domain type='qemu'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <domain type='kvm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <pae/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <nonpae/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <acpi default='on' toggle='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <apic default='on' toggle='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <cpuselection/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <deviceboot/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <disksnapshot default='on' toggle='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <externalSnapshot/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </guest>
Oct 01 13:29:55 compute-0 nova_compute[260022]: 
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <guest>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <os_type>hvm</os_type>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <arch name='x86_64'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <wordsize>64</wordsize>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <domain type='qemu'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <domain type='kvm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <acpi default='on' toggle='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <apic default='on' toggle='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <cpuselection/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <deviceboot/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <disksnapshot default='on' toggle='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <externalSnapshot/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </guest>
Oct 01 13:29:55 compute-0 nova_compute[260022]: 
Oct 01 13:29:55 compute-0 nova_compute[260022]: </capabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]: 
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.647 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.684 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 01 13:29:55 compute-0 nova_compute[260022]: <domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <domain>kvm</domain>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <arch>i686</arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <vcpu max='4096'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <iothreads supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <os supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='firmware'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <loader supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>rom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pflash</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='readonly'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>yes</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='secure'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </loader>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </os>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-passthrough' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='hostPassthroughMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='maximum' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='maximumMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-model' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <vendor>AMD</vendor>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='x2apic'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='hypervisor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='stibp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='overflow-recov'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='succor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lbrv'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-scale'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='flushbyasid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pause-filter'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pfthreshold'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rdctl-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='mds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='gds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rfds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='disable' name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='custom' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Dhyana-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-128'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-256'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-512'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v6'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v7'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <memoryBacking supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='sourceType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>file</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>anonymous</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>memfd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </memoryBacking>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <disk supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='diskDevice'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>disk</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cdrom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>floppy</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>lun</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>fdc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>sata</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </disk>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <graphics supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vnc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egl-headless</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>dbus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </graphics>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <video supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='modelType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vga</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cirrus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>none</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>bochs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ramfb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </video>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hostdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='mode'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>subsystem</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='startupPolicy'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>mandatory</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>requisite</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>optional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='subsysType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pci</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='capsType'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='pciBackend'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hostdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <rng supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>random</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </rng>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <filesystem supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='driverType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>path</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>handle</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtiofs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </filesystem>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <tpm supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-tis</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-crb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emulator</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>external</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendVersion'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>2.0</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </tpm>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <redirdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </redirdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <channel supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pty</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>unix</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </channel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <crypto supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>qemu</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </crypto>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <interface supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>passt</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </interface>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <panic supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>isa</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>hyperv</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </panic>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <gic supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <vmcoreinfo supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <genid supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backingStoreInput supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backup supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <async-teardown supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <ps2 supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sev supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sgx supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hyperv supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='features'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>relaxed</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vapic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>spinlocks</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vpindex</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>runtime</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>synic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>stimer</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reset</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vendor_id</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>frequencies</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reenlightenment</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tlbflush</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ipi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>avic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emsr_bitmap</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>xmm_input</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hyperv>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <launchSecurity supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]: </domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.690 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 01 13:29:55 compute-0 nova_compute[260022]: <domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <domain>kvm</domain>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <arch>i686</arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <vcpu max='240'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <iothreads supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <os supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='firmware'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <loader supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>rom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pflash</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='readonly'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>yes</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='secure'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </loader>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </os>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-passthrough' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='hostPassthroughMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='maximum' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='maximumMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-model' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <vendor>AMD</vendor>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='x2apic'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='hypervisor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='stibp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='overflow-recov'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='succor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lbrv'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-scale'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='flushbyasid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pause-filter'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pfthreshold'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rdctl-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='mds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='gds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rfds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='disable' name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='custom' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Dhyana-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-128'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-256'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-512'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v6'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v7'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <memoryBacking supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='sourceType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>file</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>anonymous</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>memfd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </memoryBacking>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <disk supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='diskDevice'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>disk</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cdrom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>floppy</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>lun</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ide</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>fdc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>sata</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </disk>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <graphics supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vnc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egl-headless</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>dbus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </graphics>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <video supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='modelType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vga</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cirrus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>none</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>bochs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ramfb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </video>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hostdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='mode'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>subsystem</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='startupPolicy'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>mandatory</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>requisite</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>optional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='subsysType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pci</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='capsType'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='pciBackend'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hostdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <rng supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>random</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </rng>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <filesystem supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='driverType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>path</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>handle</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtiofs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </filesystem>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <tpm supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-tis</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-crb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emulator</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>external</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendVersion'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>2.0</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </tpm>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <redirdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </redirdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <channel supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pty</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>unix</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </channel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <crypto supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>qemu</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </crypto>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <interface supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>passt</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </interface>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <panic supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>isa</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>hyperv</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </panic>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <gic supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <vmcoreinfo supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <genid supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backingStoreInput supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backup supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <async-teardown supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <ps2 supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sev supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sgx supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hyperv supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='features'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>relaxed</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vapic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>spinlocks</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vpindex</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>runtime</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>synic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>stimer</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reset</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vendor_id</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>frequencies</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reenlightenment</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tlbflush</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ipi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>avic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emsr_bitmap</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>xmm_input</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hyperv>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <launchSecurity supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]: </domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.717 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.723 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 01 13:29:55 compute-0 nova_compute[260022]: <domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <domain>kvm</domain>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <arch>x86_64</arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <vcpu max='4096'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <iothreads supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <os supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='firmware'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>efi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <loader supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>rom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pflash</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='readonly'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>yes</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='secure'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>yes</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </loader>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </os>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-passthrough' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='hostPassthroughMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='maximum' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='maximumMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-model' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <vendor>AMD</vendor>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='x2apic'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='hypervisor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='stibp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='overflow-recov'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='succor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lbrv'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-scale'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='flushbyasid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pause-filter'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pfthreshold'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rdctl-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='mds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='gds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rfds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='disable' name='xsaves'/>
Oct 01 13:29:55 compute-0 ceph-mon[74802]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='custom' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Dhyana-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-128'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-256'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-512'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v6'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v7'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <memoryBacking supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='sourceType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>file</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>anonymous</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>memfd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </memoryBacking>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <disk supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='diskDevice'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>disk</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cdrom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>floppy</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>lun</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>fdc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>sata</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </disk>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <graphics supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vnc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egl-headless</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>dbus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </graphics>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <video supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='modelType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vga</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cirrus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>none</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>bochs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ramfb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </video>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hostdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='mode'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>subsystem</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='startupPolicy'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>mandatory</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>requisite</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>optional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='subsysType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pci</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='capsType'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='pciBackend'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hostdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <rng supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>random</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </rng>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <filesystem supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='driverType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>path</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>handle</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtiofs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </filesystem>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <tpm supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-tis</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-crb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emulator</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>external</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendVersion'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>2.0</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </tpm>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <redirdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </redirdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <channel supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pty</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>unix</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </channel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <crypto supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>qemu</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </crypto>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <interface supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>passt</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </interface>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <panic supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>isa</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>hyperv</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </panic>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <gic supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <vmcoreinfo supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <genid supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backingStoreInput supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backup supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <async-teardown supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <ps2 supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sev supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sgx supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hyperv supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='features'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>relaxed</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vapic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>spinlocks</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vpindex</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>runtime</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>synic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>stimer</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reset</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vendor_id</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>frequencies</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reenlightenment</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tlbflush</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ipi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>avic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emsr_bitmap</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>xmm_input</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hyperv>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <launchSecurity supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]: </domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.778 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 01 13:29:55 compute-0 nova_compute[260022]: <domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <path>/usr/libexec/qemu-kvm</path>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <domain>kvm</domain>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <arch>x86_64</arch>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <vcpu max='240'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <iothreads supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <os supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='firmware'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <loader supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>rom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pflash</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='readonly'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>yes</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='secure'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>no</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </loader>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </os>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-passthrough' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='hostPassthroughMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='maximum' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='maximumMigratable'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>on</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>off</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='host-model' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <vendor>AMD</vendor>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='x2apic'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-deadline'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='hypervisor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc_adjust'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='spec-ctrl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='stibp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='arch-capabilities'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='cmp_legacy'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='overflow-recov'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='succor'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='amd-ssbd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='virt-ssbd'/>
Oct 01 13:29:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:29:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5750 writes, 24K keys, 5750 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5750 writes, 952 syncs, 6.04 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6550e31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lbrv'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='tsc-scale'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='vmcb-clean'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='flushbyasid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pause-filter'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pfthreshold'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='svme-addr-chk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rdctl-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='mds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='pschange-mc-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='gds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='require' name='rfds-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <feature policy='disable' name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <mode name='custom' supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Broadwell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cascadelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Cooperlake-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Denverton-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Dhyana-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Genoa-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='auto-ibrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Milan-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amd-psfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='no-nested-data-bp'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='null-sel-clr-base'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='stibp-always-on'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-Rome-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='EPYC-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='GraniteRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-128'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-256'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx10-512'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='prefetchiti'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Haswell-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-noTSX'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v6'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Icelake-Server-v7'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='IvyBridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='KnightsMill-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4fmaps'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-4vnniw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512er'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512pf'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G4-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Opteron_G5-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fma4'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tbm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xop'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SapphireRapids-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='amx-tile'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-bf16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-fp16'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512-vpopcntdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bitalg'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vbmi2'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrc'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fzrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='la57'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='taa-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='tsx-ldtrk'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xfd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='SierraForest-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ifma'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-ne-convert'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx-vnni-int8'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='bus-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cmpccxadd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fbsdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='fsrs'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ibrs-all'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mcdt-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pbrsb-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='psdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='sbdr-ssdp-no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='serialize'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vaes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='vpclmulqdq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Client-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='hle'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='rtm'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Skylake-Server-v5'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512bw'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512cd'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512dq'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512f'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='avx512vl'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='invpcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pcid'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='pku'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='mpx'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v2'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v3'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='core-capability'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='split-lock-detect'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='Snowridge-v4'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='cldemote'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='erms'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='gfni'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdir64b'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='movdiri'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='xsaves'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='athlon-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='core2duo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='coreduo-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='n270-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='ss'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <blockers model='phenom-v1'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnow'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <feature name='3dnowext'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </blockers>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </mode>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </cpu>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <memoryBacking supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <enum name='sourceType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>file</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>anonymous</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <value>memfd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </memoryBacking>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <disk supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='diskDevice'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>disk</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cdrom</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>floppy</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>lun</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ide</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>fdc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>sata</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </disk>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <graphics supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vnc</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egl-headless</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>dbus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </graphics>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <video supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='modelType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vga</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>cirrus</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>none</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>bochs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ramfb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </video>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hostdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='mode'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>subsystem</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='startupPolicy'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>mandatory</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>requisite</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>optional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='subsysType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pci</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>scsi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='capsType'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='pciBackend'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hostdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <rng supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtio-non-transitional</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>random</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>egd</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </rng>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <filesystem supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='driverType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>path</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>handle</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>virtiofs</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </filesystem>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <tpm supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-tis</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tpm-crb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emulator</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>external</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendVersion'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>2.0</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </tpm>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <redirdev supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='bus'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>usb</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </redirdev>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <channel supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>pty</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>unix</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </channel>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <crypto supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='type'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>qemu</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendModel'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>builtin</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </crypto>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <interface supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='backendType'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>default</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>passt</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </interface>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <panic supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='model'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>isa</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>hyperv</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </panic>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </devices>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   <features>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <gic supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <vmcoreinfo supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <genid supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backingStoreInput supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <backup supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <async-teardown supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <ps2 supported='yes'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sev supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <sgx supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <hyperv supported='yes'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       <enum name='features'>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>relaxed</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vapic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>spinlocks</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vpindex</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>runtime</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>synic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>stimer</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reset</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>vendor_id</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>frequencies</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>reenlightenment</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>tlbflush</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>ipi</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>avic</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>emsr_bitmap</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:         <value>xmm_input</value>
Oct 01 13:29:55 compute-0 nova_compute[260022]:       </enum>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     </hyperv>
Oct 01 13:29:55 compute-0 nova_compute[260022]:     <launchSecurity supported='no'/>
Oct 01 13:29:55 compute-0 nova_compute[260022]:   </features>
Oct 01 13:29:55 compute-0 nova_compute[260022]: </domainCapabilities>
Oct 01 13:29:55 compute-0 nova_compute[260022]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.836 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.837 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Secure Boot support detected
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.839 2 INFO nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.848 2 DEBUG nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 01 13:29:55 compute-0 nova_compute[260022]: 2025-10-01 13:29:55.956 2 INFO nova.virt.node [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Determined node identity c1b9017d-7e6f-44ea-9ee2-bc19313d736f from /var/lib/nova/compute_id
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.094 2 WARNING nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute nodes ['c1b9017d-7e6f-44ea-9ee2-bc19313d736f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 01 13:29:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.544 2 INFO nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.956 2 WARNING nova.compute.manager [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:29:56 compute-0 nova_compute[260022]: 2025-10-01 13:29:56.957 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:29:57 compute-0 ceph-mon[74802]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:29:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:29:57 compute-0 sshd-session[260366]: Invalid user admin1 from 80.253.31.232 port 58600
Oct 01 13:29:57 compute-0 sshd-session[260366]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:29:57 compute-0 sshd-session[260366]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:29:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:29:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814815196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:29:57 compute-0 nova_compute[260022]: 2025-10-01 13:29:57.411 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:29:57 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 01 13:29:57 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 01 13:29:57 compute-0 nova_compute[260022]: 2025-10-01 13:29:57.935 2 WARNING nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:29:57 compute-0 nova_compute[260022]: 2025-10-01 13:29:57.936 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:29:57 compute-0 nova_compute[260022]: 2025-10-01 13:29:57.936 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:29:57 compute-0 nova_compute[260022]: 2025-10-01 13:29:57.937 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:29:58 compute-0 nova_compute[260022]: 2025-10-01 13:29:58.047 2 WARNING nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:c1b9017d-7e6f-44ea-9ee2-bc19313d736f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c1b9017d-7e6f-44ea-9ee2-bc19313d736f could not be found.
Oct 01 13:29:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/814815196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:29:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:58 compute-0 nova_compute[260022]: 2025-10-01 13:29:58.291 2 INFO nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: c1b9017d-7e6f-44ea-9ee2-bc19313d736f
Oct 01 13:29:58 compute-0 sudo[260412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:29:58 compute-0 sudo[260412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:58 compute-0 sudo[260412]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:58 compute-0 podman[260437]: 2025-10-01 13:29:58.523620633 +0000 UTC m=+0.075795811 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:29:58 compute-0 podman[260438]: 2025-10-01 13:29:58.524325325 +0000 UTC m=+0.074017574 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:29:58 compute-0 sudo[260461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:29:58 compute-0 sudo[260461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:58 compute-0 sudo[260461]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:58 compute-0 podman[260436]: 2025-10-01 13:29:58.560951519 +0000 UTC m=+0.114814592 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:29:58 compute-0 sudo[260521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:29:58 compute-0 sudo[260521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:58 compute-0 sudo[260521]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:58 compute-0 sshd-session[260366]: Failed password for invalid user admin1 from 80.253.31.232 port 58600 ssh2
Oct 01 13:29:58 compute-0 podman[260519]: 2025-10-01 13:29:58.652543593 +0000 UTC m=+0.083352933 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:29:58 compute-0 sudo[260565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:29:58 compute-0 sudo[260565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:58 compute-0 nova_compute[260022]: 2025-10-01 13:29:58.754 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:29:58 compute-0 nova_compute[260022]: 2025-10-01 13:29:58.755 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:29:59 compute-0 ceph-mon[74802]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:29:59 compute-0 sudo[260565]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:29:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fff1b8d0-0beb-43c4-830a-106289ae127e does not exist
Oct 01 13:29:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 20bc2ca1-9efb-4e9d-b8db-16b22bb3d8f7 does not exist
Oct 01 13:29:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7d763adf-2e49-42d2-b485-4a38e14b8721 does not exist
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:29:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:29:59 compute-0 sudo[260621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:29:59 compute-0 sudo[260621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:59 compute-0 sudo[260621]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:59 compute-0 sshd-session[260366]: Received disconnect from 80.253.31.232 port 58600:11: Bye Bye [preauth]
Oct 01 13:29:59 compute-0 sshd-session[260366]: Disconnected from invalid user admin1 80.253.31.232 port 58600 [preauth]
Oct 01 13:29:59 compute-0 sudo[260646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:29:59 compute-0 sudo[260646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:59 compute-0 sudo[260646]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:59 compute-0 sudo[260671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:29:59 compute-0 sudo[260671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:59 compute-0 sudo[260671]: pam_unix(sudo:session): session closed for user root
Oct 01 13:29:59 compute-0 sudo[260696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:29:59 compute-0 sudo[260696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:29:59 compute-0 nova_compute[260022]: 2025-10-01 13:29:59.681 2 INFO nova.scheduler.client.report [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] [req-b3b32f97-81e7-470b-8239-b0299d55b12e] Created resource provider record via placement API for resource provider with UUID c1b9017d-7e6f-44ea-9ee2-bc19313d736f and name compute-0.ctlplane.example.com.
Oct 01 13:29:59 compute-0 podman[260760]: 2025-10-01 13:29:59.810587777 +0000 UTC m=+0.043147463 container create 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:29:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:29:59 compute-0 systemd[1]: Started libpod-conmon-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope.
Oct 01 13:29:59 compute-0 podman[260760]: 2025-10-01 13:29:59.790407935 +0000 UTC m=+0.022967651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:29:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:29:59 compute-0 podman[260760]: 2025-10-01 13:29:59.954356749 +0000 UTC m=+0.186916525 container init 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:29:59 compute-0 podman[260760]: 2025-10-01 13:29:59.964892814 +0000 UTC m=+0.197452540 container start 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:29:59 compute-0 keen_haslett[260776]: 167 167
Oct 01 13:29:59 compute-0 systemd[1]: libpod-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope: Deactivated successfully.
Oct 01 13:29:59 compute-0 conmon[260776]: conmon 315e4987f51d9f005a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope/container/memory.events
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.073 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:30:00 compute-0 podman[260760]: 2025-10-01 13:30:00.211589619 +0000 UTC m=+0.444149345 container attach 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:30:00 compute-0 podman[260760]: 2025-10-01 13:30:00.212999693 +0000 UTC m=+0.445559389 container died 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:30:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:30:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:30:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:30:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441398135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.606 2 DEBUG oslo_concurrency.processutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.613 2 DEBUG nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 01 13:30:00 compute-0 nova_compute[260022]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.614 2 INFO nova.virt.libvirt.host [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] kernel doesn't support AMD SEV
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.615 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.615 2 DEBUG nova.virt.libvirt.driver [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 01 13:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb0c2544377ea915647b48157c7aefaf4c1f7ad9664a98caddfdd2d0a779015-merged.mount: Deactivated successfully.
Oct 01 13:30:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:30:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6974 writes, 28K keys, 6974 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6974 writes, 1320 syncs, 5.28 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f3dbe0d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.861 2 DEBUG nova.scheduler.client.report [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updated inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.862 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 01 13:30:00 compute-0 nova_compute[260022]: 2025-10-01 13:30:00.862 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:30:01 compute-0 podman[260760]: 2025-10-01 13:30:01.149333339 +0000 UTC m=+1.381893065 container remove 315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:30:01 compute-0 systemd[1]: libpod-conmon-315e4987f51d9f005a804c20b11f204615edbfe42af2766a52d47b8986cbe716.scope: Deactivated successfully.
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.177 2 DEBUG nova.compute.provider_tree [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Updating resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.243 2 DEBUG nova.compute.resource_tracker [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.251 2 DEBUG oslo_concurrency.lockutils [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.251 2 DEBUG nova.service [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 01 13:30:01 compute-0 podman[260823]: 2025-10-01 13:30:01.36823378 +0000 UTC m=+0.024432849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.726 2 DEBUG nova.service [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 01 13:30:01 compute-0 nova_compute[260022]: 2025-10-01 13:30:01.727 2 DEBUG nova.servicegroup.drivers.db [None req-26a6fe1d-c0dc-484f-b511-7a57f237dea9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 01 13:30:01 compute-0 ceph-mon[74802]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1441398135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:01 compute-0 podman[260823]: 2025-10-01 13:30:01.841085516 +0000 UTC m=+0.497284615 container create 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:30:02 compute-0 systemd[1]: Started libpod-conmon-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope.
Oct 01 13:30:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:02 compute-0 podman[260823]: 2025-10-01 13:30:02.373976631 +0000 UTC m=+1.030175760 container init 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:30:02 compute-0 podman[260823]: 2025-10-01 13:30:02.387535973 +0000 UTC m=+1.043735022 container start 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:30:02 compute-0 podman[260823]: 2025-10-01 13:30:02.585302102 +0000 UTC m=+1.241501251 container attach 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:30:03 compute-0 ceph-mon[74802]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:03 compute-0 objective_wright[260839]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:30:03 compute-0 objective_wright[260839]: --> relative data size: 1.0
Oct 01 13:30:03 compute-0 objective_wright[260839]: --> All data devices are unavailable
Oct 01 13:30:03 compute-0 systemd[1]: libpod-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Deactivated successfully.
Oct 01 13:30:03 compute-0 systemd[1]: libpod-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Consumed 1.233s CPU time.
Oct 01 13:30:03 compute-0 podman[260823]: 2025-10-01 13:30:03.71608561 +0000 UTC m=+2.372284689 container died 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 13:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1fe6a417f146b23c64e0edd1209f867898e1d578e336a3922bb90d6cbcd703-merged.mount: Deactivated successfully.
Oct 01 13:30:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:04 compute-0 podman[260823]: 2025-10-01 13:30:04.395443522 +0000 UTC m=+3.051642611 container remove 3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:30:04 compute-0 systemd[1]: libpod-conmon-3687a512f422d44b7f69acca7b19791cf23e9a0c1009966e3a68b43d0bacefbd.scope: Deactivated successfully.
Oct 01 13:30:04 compute-0 sudo[260696]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:04 compute-0 sudo[260882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:30:04 compute-0 sudo[260882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:04 compute-0 sudo[260882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:04 compute-0 sudo[260907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:30:04 compute-0 sudo[260907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:04 compute-0 sudo[260907]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:04 compute-0 sudo[260932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:30:04 compute-0 sudo[260932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:04 compute-0 sudo[260932]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:04 compute-0 sudo[260957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:30:04 compute-0 sudo[260957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:05 compute-0 podman[261024]: 2025-10-01 13:30:05.214322532 +0000 UTC m=+0.025293815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:30:05 compute-0 podman[261024]: 2025-10-01 13:30:05.473228615 +0000 UTC m=+0.284199928 container create 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:30:05 compute-0 ceph-mon[74802]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:05 compute-0 systemd[1]: Started libpod-conmon-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope.
Oct 01 13:30:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:30:05 compute-0 podman[261024]: 2025-10-01 13:30:05.803343413 +0000 UTC m=+0.614314756 container init 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:30:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:30:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5635 writes, 24K keys, 5635 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5635 writes, 875 syncs, 6.44 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb87090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b1adb871f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 01 13:30:05 compute-0 podman[261024]: 2025-10-01 13:30:05.816394108 +0000 UTC m=+0.627365371 container start 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:30:05 compute-0 pensive_jepsen[261040]: 167 167
Oct 01 13:30:05 compute-0 systemd[1]: libpod-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope: Deactivated successfully.
Oct 01 13:30:06 compute-0 podman[261024]: 2025-10-01 13:30:06.078691089 +0000 UTC m=+0.889662452 container attach 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:30:06 compute-0 podman[261024]: 2025-10-01 13:30:06.079765123 +0000 UTC m=+0.890736426 container died 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bb6fc4c28f3292c5c5e5de99fa96329822a5e3f3162a76c29ae4989b83c144c-merged.mount: Deactivated successfully.
Oct 01 13:30:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:06 compute-0 podman[261024]: 2025-10-01 13:30:06.630425783 +0000 UTC m=+1.441397076 container remove 97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_jepsen, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:30:06 compute-0 systemd[1]: libpod-conmon-97350cde3fae66c0af95a67f8d55314d6b5cd2d903c2c8ba21c778d6718ba411.scope: Deactivated successfully.
Oct 01 13:30:06 compute-0 podman[261067]: 2025-10-01 13:30:06.828316066 +0000 UTC m=+0.035795189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:30:07 compute-0 podman[261067]: 2025-10-01 13:30:07.154978104 +0000 UTC m=+0.362457267 container create 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:30:07 compute-0 systemd[1]: Started libpod-conmon-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope.
Oct 01 13:30:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:07 compute-0 podman[261067]: 2025-10-01 13:30:07.462056809 +0000 UTC m=+0.669536012 container init 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:30:07 compute-0 podman[261067]: 2025-10-01 13:30:07.469615529 +0000 UTC m=+0.677094642 container start 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:30:07 compute-0 podman[261067]: 2025-10-01 13:30:07.542667292 +0000 UTC m=+0.750146445 container attach 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:30:07 compute-0 ceph-mon[74802]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 13:30:08 compute-0 gallant_yalow[261083]: {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     "0": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "devices": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "/dev/loop3"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             ],
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_name": "ceph_lv0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_size": "21470642176",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "name": "ceph_lv0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "tags": {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_name": "ceph",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.crush_device_class": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.encrypted": "0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_id": "0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.vdo": "0"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             },
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "vg_name": "ceph_vg0"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         }
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     ],
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     "1": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "devices": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "/dev/loop4"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             ],
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_name": "ceph_lv1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_size": "21470642176",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "name": "ceph_lv1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "tags": {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_name": "ceph",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.crush_device_class": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.encrypted": "0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_id": "1",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.vdo": "0"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             },
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "vg_name": "ceph_vg1"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         }
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     ],
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     "2": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "devices": [
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "/dev/loop5"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             ],
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_name": "ceph_lv2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_size": "21470642176",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "name": "ceph_lv2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "tags": {
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.cluster_name": "ceph",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.crush_device_class": "",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.encrypted": "0",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osd_id": "2",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:                 "ceph.vdo": "0"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             },
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "type": "block",
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:             "vg_name": "ceph_vg2"
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:         }
Oct 01 13:30:08 compute-0 gallant_yalow[261083]:     ]
Oct 01 13:30:08 compute-0 gallant_yalow[261083]: }
Oct 01 13:30:08 compute-0 systemd[1]: libpod-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope: Deactivated successfully.
Oct 01 13:30:08 compute-0 podman[261067]: 2025-10-01 13:30:08.27121245 +0000 UTC m=+1.478691623 container died 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:30:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2af83e7c460ebf2df9dab9b259ed08c578d8437f4eaaeeac53ac6ae8230da6c-merged.mount: Deactivated successfully.
Oct 01 13:30:09 compute-0 podman[261067]: 2025-10-01 13:30:09.332545779 +0000 UTC m=+2.540024932 container remove 8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_yalow, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:30:09 compute-0 systemd[1]: libpod-conmon-8ba6a4f1b83e16be0049394d40000471e271498a16a7486c07c0d5fc8aa0c006.scope: Deactivated successfully.
Oct 01 13:30:09 compute-0 sudo[260957]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:09 compute-0 sudo[261106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:30:09 compute-0 sudo[261106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:09 compute-0 sudo[261106]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:09 compute-0 sudo[261131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:30:09 compute-0 sudo[261131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:09 compute-0 sudo[261131]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:09 compute-0 sudo[261156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:30:09 compute-0 sudo[261156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:09 compute-0 sudo[261156]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:09 compute-0 sudo[261181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:30:09 compute-0 sudo[261181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:09 compute-0 ceph-mon[74802]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.27003733 +0000 UTC m=+0.127936339 container create 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.18008935 +0000 UTC m=+0.037988399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:30:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:10 compute-0 systemd[1]: Started libpod-conmon-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope.
Oct 01 13:30:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.585003196 +0000 UTC m=+0.442902185 container init 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.597794643 +0000 UTC m=+0.455693652 container start 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:30:10 compute-0 elegant_shockley[261262]: 167 167
Oct 01 13:30:10 compute-0 systemd[1]: libpod-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope: Deactivated successfully.
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.709912118 +0000 UTC m=+0.567811107 container attach 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:30:10 compute-0 podman[261245]: 2025-10-01 13:30:10.710616421 +0000 UTC m=+0.568515390 container died 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c3a2e52cc804523dc2aea6bb4c0fedb5d023bd632b718ea15a7655407f2d0c0-merged.mount: Deactivated successfully.
Oct 01 13:30:11 compute-0 ceph-mon[74802]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:12 compute-0 podman[261245]: 2025-10-01 13:30:12.018313253 +0000 UTC m=+1.876212222 container remove 7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:30:12 compute-0 systemd[1]: libpod-conmon-7b2fef4a4889e92dcda97d2695dc9a4ba87d967540155d3f71c1997544166c4b.scope: Deactivated successfully.
Oct 01 13:30:12 compute-0 podman[261286]: 2025-10-01 13:30:12.252802702 +0000 UTC m=+0.108558484 container create f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:30:12 compute-0 podman[261286]: 2025-10-01 13:30:12.166419945 +0000 UTC m=+0.022175697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:30:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.297 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:30:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.298 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:30:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:30:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:30:12 compute-0 systemd[1]: Started libpod-conmon-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope.
Oct 01 13:30:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:30:12 compute-0 podman[261286]: 2025-10-01 13:30:12.442878886 +0000 UTC m=+0.298634688 container init f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:30:12 compute-0 podman[261286]: 2025-10-01 13:30:12.455414364 +0000 UTC m=+0.311170136 container start f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:30:12 compute-0 podman[261286]: 2025-10-01 13:30:12.686447991 +0000 UTC m=+0.542203823 container attach f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]: {
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_id": 0,
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "type": "bluestore"
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     },
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_id": 2,
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "type": "bluestore"
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     },
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_id": 1,
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:         "type": "bluestore"
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]:     }
Oct 01 13:30:13 compute-0 peaceful_thompson[261302]: }
Oct 01 13:30:13 compute-0 systemd[1]: libpod-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Deactivated successfully.
Oct 01 13:30:13 compute-0 systemd[1]: libpod-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Consumed 1.164s CPU time.
Oct 01 13:30:13 compute-0 podman[261286]: 2025-10-01 13:30:13.638521696 +0000 UTC m=+1.494277438 container died f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:30:14 compute-0 ceph-mon[74802]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-597b19863ea2380b7a473cb6ab180f4900fe185928b844b819e4b5b9c39c255e-merged.mount: Deactivated successfully.
Oct 01 13:30:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:14 compute-0 podman[261286]: 2025-10-01 13:30:14.445057663 +0000 UTC m=+2.300813435 container remove f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:30:14 compute-0 systemd[1]: libpod-conmon-f5bb46ed7d84c78f493c7b06c27c6c15adae50943a08df27372e6ff5fbea1074.scope: Deactivated successfully.
Oct 01 13:30:14 compute-0 sudo[261181]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:30:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:30:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:30:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:30:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 665e89a6-ec32-47e1-86f9-163e304ad0bd does not exist
Oct 01 13:30:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c2f42cfa-918d-46e1-a090-e7173f65b02c does not exist
Oct 01 13:30:14 compute-0 sudo[261349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:30:14 compute-0 sudo[261349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:14 compute-0 sudo[261349]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:14 compute-0 sudo[261374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:30:14 compute-0 sudo[261374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:30:14 compute-0 sudo[261374]: pam_unix(sudo:session): session closed for user root
Oct 01 13:30:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:15 compute-0 ceph-mon[74802]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:30:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:30:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:17 compute-0 ceph-mon[74802]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:19 compute-0 ceph-mon[74802]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:30:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:30:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:21 compute-0 ceph-mon[74802]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:30:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:30:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2679179799' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1290104316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:24 compute-0 ceph-mon[74802]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:25 compute-0 ceph-mon[74802]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:27 compute-0 ceph-mon[74802]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:30:29 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:30:29 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:29 compute-0 ceph-mon[74802]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:29 compute-0 podman[261402]: 2025-10-01 13:30:29.561258444 +0000 UTC m=+0.087771811 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:30:29 compute-0 podman[261400]: 2025-10-01 13:30:29.57277216 +0000 UTC m=+0.099871586 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:30:29 compute-0 podman[261401]: 2025-10-01 13:30:29.574823025 +0000 UTC m=+0.098442351 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:30:29 compute-0 podman[261399]: 2025-10-01 13:30:29.618809055 +0000 UTC m=+0.146697667 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 01 13:30:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:30:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1677239573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:30:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:31 compute-0 ceph-mon[74802]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:34 compute-0 ceph-mon[74802]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:35 compute-0 ceph-mon[74802]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:37 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct 01 13:30:37 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:37.643130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:30:37 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct 01 13:30:37 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325437643176, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1436, "num_deletes": 251, "total_data_size": 2270403, "memory_usage": 2301576, "flush_reason": "Manual Compaction"}
Oct 01 13:30:37 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct 01 13:30:37 compute-0 ceph-mon[74802]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325438059105, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2238122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14991, "largest_seqno": 16426, "table_properties": {"data_size": 2231409, "index_size": 3848, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13793, "raw_average_key_size": 19, "raw_value_size": 2217956, "raw_average_value_size": 3163, "num_data_blocks": 176, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325278, "oldest_key_time": 1759325278, "file_creation_time": 1759325437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 416079 microseconds, and 9899 cpu microseconds.
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.059201) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2238122 bytes OK
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.059231) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218864) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218933) EVENT_LOG_v1 {"time_micros": 1759325438218916, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.218968) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2264060, prev total WAL file size 2264060, number of live WAL files 2.
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.220639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2185KB)], [35(7275KB)]
Oct 01 13:30:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325438220759, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9688056, "oldest_snapshot_seqno": -1}
Oct 01 13:30:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4035 keys, 7910330 bytes, temperature: kUnknown
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439110629, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7910330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7880747, "index_size": 18401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98539, "raw_average_key_size": 24, "raw_value_size": 7805156, "raw_average_value_size": 1934, "num_data_blocks": 778, "num_entries": 4035, "num_filter_entries": 4035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.111142) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7910330 bytes
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.335696) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.9 rd, 8.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 4549, records dropped: 514 output_compression: NoCompression
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.335790) EVENT_LOG_v1 {"time_micros": 1759325439335768, "job": 16, "event": "compaction_finished", "compaction_time_micros": 889981, "compaction_time_cpu_micros": 23331, "output_level": 6, "num_output_files": 1, "total_output_size": 7910330, "num_input_records": 4549, "num_output_records": 4035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439336713, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325439339678, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:38.220429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:30:39.339846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:30:39 compute-0 ceph-mon[74802]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:41 compute-0 ceph-mon[74802]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:44 compute-0 ceph-mon[74802]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:44 compute-0 unix_chkpwd[261484]: password check failed for user (root)
Oct 01 13:30:44 compute-0 sshd-session[261482]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46  user=root
Oct 01 13:30:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:45 compute-0 ceph-mon[74802]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:46 compute-0 sshd-session[261482]: Failed password for root from 156.236.31.46 port 45386 ssh2
Oct 01 13:30:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:30:47
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms']
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:30:47 compute-0 ceph-mon[74802]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:30:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:30:48 compute-0 sshd-session[261482]: Received disconnect from 156.236.31.46 port 45386:11: Bye Bye [preauth]
Oct 01 13:30:48 compute-0 sshd-session[261482]: Disconnected from authenticating user root 156.236.31.46 port 45386 [preauth]
Oct 01 13:30:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:49 compute-0 ceph-mon[74802]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:49 compute-0 nova_compute[260022]: 2025-10-01 13:30:49.729 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:49 compute-0 nova_compute[260022]: 2025-10-01 13:30:49.756 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:51 compute-0 ceph-mon[74802]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.405 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.405 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.406 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.407 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.408 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.408 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.409 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.544 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.546 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.547 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.547 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:30:53 compute-0 nova_compute[260022]: 2025-10-01 13:30:53.548 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:30:53 compute-0 sshd-session[261485]: Invalid user kbe from 27.254.137.144 port 44458
Oct 01 13:30:53 compute-0 sshd-session[261485]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:30:53 compute-0 sshd-session[261485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:30:53 compute-0 unix_chkpwd[261509]: password check failed for user (root)
Oct 01 13:30:53 compute-0 sshd-session[261487]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139  user=root
Oct 01 13:30:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:30:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384273131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.152 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:30:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.371 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:30:54 compute-0 ceph-mon[74802]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.787 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.788 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:30:54 compute-0 nova_compute[260022]: 2025-10-01 13:30:54.805 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:30:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:30:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:30:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380029336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:55 compute-0 nova_compute[260022]: 2025-10-01 13:30:55.275 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:30:55 compute-0 nova_compute[260022]: 2025-10-01 13:30:55.281 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:30:55 compute-0 nova_compute[260022]: 2025-10-01 13:30:55.318 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:30:55 compute-0 nova_compute[260022]: 2025-10-01 13:30:55.319 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:30:55 compute-0 nova_compute[260022]: 2025-10-01 13:30:55.320 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:30:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/384273131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:55 compute-0 ceph-mon[74802]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/380029336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:30:55 compute-0 sshd-session[261485]: Failed password for invalid user kbe from 27.254.137.144 port 44458 ssh2
Oct 01 13:30:55 compute-0 sshd-session[261487]: Failed password for root from 200.7.101.139 port 41538 ssh2
Oct 01 13:30:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:56 compute-0 sshd-session[261485]: Received disconnect from 27.254.137.144 port 44458:11: Bye Bye [preauth]
Oct 01 13:30:56 compute-0 sshd-session[261485]: Disconnected from invalid user kbe 27.254.137.144 port 44458 [preauth]
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:30:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:30:57 compute-0 ceph-mon[74802]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:57 compute-0 sshd-session[261487]: Received disconnect from 200.7.101.139 port 41538:11: Bye Bye [preauth]
Oct 01 13:30:57 compute-0 sshd-session[261487]: Disconnected from authenticating user root 200.7.101.139 port 41538 [preauth]
Oct 01 13:30:57 compute-0 unix_chkpwd[261536]: password check failed for user (root)
Oct 01 13:30:57 compute-0 sshd-session[261534]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232  user=root
Oct 01 13:30:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:59 compute-0 ceph-mon[74802]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:30:59 compute-0 sshd-session[261534]: Failed password for root from 80.253.31.232 port 43224 ssh2
Oct 01 13:30:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:00 compute-0 podman[261545]: 2025-10-01 13:31:00.555389088 +0000 UTC m=+0.088819735 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 01 13:31:00 compute-0 podman[261538]: 2025-10-01 13:31:00.559442338 +0000 UTC m=+0.098811954 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd)
Oct 01 13:31:00 compute-0 podman[261539]: 2025-10-01 13:31:00.5680133 +0000 UTC m=+0.105987332 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923)
Oct 01 13:31:00 compute-0 podman[261537]: 2025-10-01 13:31:00.59916423 +0000 UTC m=+0.150349602 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:31:01 compute-0 ceph-mon[74802]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:01 compute-0 sshd-session[261534]: Received disconnect from 80.253.31.232 port 43224:11: Bye Bye [preauth]
Oct 01 13:31:01 compute-0 sshd-session[261534]: Disconnected from authenticating user root 80.253.31.232 port 43224 [preauth]
Oct 01 13:31:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:03 compute-0 ceph-mon[74802]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:05 compute-0 ceph-mon[74802]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:07 compute-0 ceph-mon[74802]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:31:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:10 compute-0 ceph-mon[74802]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:12 compute-0 ceph-mon[74802]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.298 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:31:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:31:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:31:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:31:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 13:31:13 compute-0 ceph-mon[74802]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 13:31:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:14 compute-0 sudo[261614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:14 compute-0 sudo[261614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:14 compute-0 sudo[261614]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:15 compute-0 sudo[261639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:31:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:15 compute-0 sudo[261639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:15 compute-0 sudo[261639]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:15 compute-0 sudo[261664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:15 compute-0 sudo[261664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:15 compute-0 sudo[261664]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:15 compute-0 sudo[261689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 13:31:15 compute-0 sudo[261689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:15 compute-0 sudo[261689]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:31:15 compute-0 ceph-mon[74802]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:31:16 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:16 compute-0 sudo[261734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:16 compute-0 sudo[261734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:16 compute-0 sudo[261734]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:16 compute-0 sudo[261759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:31:16 compute-0 sudo[261759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:16 compute-0 sudo[261759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:17 compute-0 sudo[261784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:17 compute-0 sudo[261784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:17 compute-0 sudo[261784]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:17 compute-0 sudo[261809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:31:17 compute-0 sudo[261809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:17 compute-0 sudo[261809]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:31:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:31:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:31:17 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:31:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:31:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:31:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:17 compute-0 ceph-mon[74802]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:17 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 84578743-07f2-47fb-a264-393922825af1 does not exist
Oct 01 13:31:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f8ac1b6e-7373-40ef-8b31-afeda72be9aa does not exist
Oct 01 13:31:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 21840c49-bc5e-4f85-b761-42d3e5341305 does not exist
Oct 01 13:31:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:31:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:31:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:31:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:31:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:31:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:31:18 compute-0 sudo[261865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:18 compute-0 sudo[261865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:18 compute-0 sudo[261865]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 13:31:18 compute-0 sudo[261890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:31:18 compute-0 sudo[261890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:18 compute-0 sudo[261890]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:18 compute-0 sudo[261915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:18 compute-0 sudo[261915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:18 compute-0 sudo[261915]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:18 compute-0 sudo[261940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:31:18 compute-0 sudo[261940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:18 compute-0 podman[262006]: 2025-10-01 13:31:18.899026868 +0000 UTC m=+0.026956860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:31:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:31:20 compute-0 podman[262006]: 2025-10-01 13:31:20.543241303 +0000 UTC m=+1.671171285 container create 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:31:21 compute-0 systemd[1]: Started libpod-conmon-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope.
Oct 01 13:31:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:22 compute-0 podman[262006]: 2025-10-01 13:31:22.48761986 +0000 UTC m=+3.615549912 container init 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:31:22 compute-0 podman[262006]: 2025-10-01 13:31:22.500262742 +0000 UTC m=+3.628192724 container start 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:31:22 compute-0 affectionate_kare[262022]: 167 167
Oct 01 13:31:22 compute-0 systemd[1]: libpod-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope: Deactivated successfully.
Oct 01 13:31:23 compute-0 ceph-mon[74802]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 13:31:23 compute-0 ceph-mon[74802]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Oct 01 13:31:23 compute-0 podman[262006]: 2025-10-01 13:31:23.821092926 +0000 UTC m=+4.949022918 container attach 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:31:23 compute-0 podman[262006]: 2025-10-01 13:31:23.822543912 +0000 UTC m=+4.950473944 container died 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:31:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:31:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-01b6f26c5f2bafb2299ce890fcb245dd6903b67da8c7d767582823a35c987613-merged.mount: Deactivated successfully.
Oct 01 13:31:25 compute-0 ceph-mon[74802]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Oct 01 13:31:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:26 compute-0 podman[262006]: 2025-10-01 13:31:26.881019996 +0000 UTC m=+8.008949978 container remove 42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:31:26 compute-0 systemd[1]: libpod-conmon-42138e447285c86bbce9520fa5509f385488bb4ab879a2c60379881f26addca7.scope: Deactivated successfully.
Oct 01 13:31:27 compute-0 podman[262045]: 2025-10-01 13:31:27.079567725 +0000 UTC m=+0.024495580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:27 compute-0 ceph-mon[74802]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:31:27 compute-0 podman[262045]: 2025-10-01 13:31:27.253392447 +0000 UTC m=+0.198320302 container create f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:31:27 compute-0 systemd[1]: Started libpod-conmon-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope.
Oct 01 13:31:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:27 compute-0 podman[262045]: 2025-10-01 13:31:27.96556984 +0000 UTC m=+0.910497795 container init f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:31:27 compute-0 podman[262045]: 2025-10-01 13:31:27.977648315 +0000 UTC m=+0.922576210 container start f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:31:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:31:28 compute-0 podman[262045]: 2025-10-01 13:31:28.75551852 +0000 UTC m=+1.700446465 container attach f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 13:31:29 compute-0 ceph-mon[74802]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:30 compute-0 cool_chaum[262062]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:31:30 compute-0 cool_chaum[262062]: --> relative data size: 1.0
Oct 01 13:31:30 compute-0 cool_chaum[262062]: --> All data devices are unavailable
Oct 01 13:31:30 compute-0 systemd[1]: libpod-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Deactivated successfully.
Oct 01 13:31:30 compute-0 systemd[1]: libpod-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Consumed 1.356s CPU time.
Oct 01 13:31:30 compute-0 podman[262045]: 2025-10-01 13:31:30.100174452 +0000 UTC m=+3.045102337 container died f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:31:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:31 compute-0 ceph-mon[74802]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e6b69a0960c808f7107ddea246849b74456c8cb4ec918f2fda19c58130c11e7-merged.mount: Deactivated successfully.
Oct 01 13:31:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:32 compute-0 podman[262045]: 2025-10-01 13:31:32.506083097 +0000 UTC m=+5.451010992 container remove f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chaum, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:31:32 compute-0 systemd[1]: libpod-conmon-f00b063c47bc4b978467de10be18cda37f8f7d0f354b4c11a83b4cba1cc2a03f.scope: Deactivated successfully.
Oct 01 13:31:32 compute-0 sudo[261940]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:32 compute-0 podman[262105]: 2025-10-01 13:31:32.606989368 +0000 UTC m=+1.651324842 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:31:32 compute-0 podman[262106]: 2025-10-01 13:31:32.626879771 +0000 UTC m=+1.669358525 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:31:32 compute-0 podman[262107]: 2025-10-01 13:31:32.630505387 +0000 UTC m=+1.666060891 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 01 13:31:32 compute-0 podman[262104]: 2025-10-01 13:31:32.647922802 +0000 UTC m=+1.691715758 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:31:32 compute-0 sudo[262170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:32 compute-0 sudo[262170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:32 compute-0 sudo[262170]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:32 compute-0 sudo[262208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:31:32 compute-0 sudo[262208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:32 compute-0 sudo[262208]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:32 compute-0 sudo[262233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:32 compute-0 sudo[262233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:32 compute-0 sudo[262233]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:32 compute-0 ceph-mon[74802]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:32 compute-0 sudo[262258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:31:32 compute-0 sudo[262258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:33 compute-0 podman[262322]: 2025-10-01 13:31:33.421266733 +0000 UTC m=+0.031204305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:34 compute-0 podman[262322]: 2025-10-01 13:31:34.637695193 +0000 UTC m=+1.247632785 container create b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:31:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:35 compute-0 systemd[1]: Started libpod-conmon-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope.
Oct 01 13:31:35 compute-0 ceph-mon[74802]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:35 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:36 compute-0 podman[262322]: 2025-10-01 13:31:36.444320867 +0000 UTC m=+3.054258509 container init b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:31:36 compute-0 podman[262322]: 2025-10-01 13:31:36.456296217 +0000 UTC m=+3.066233819 container start b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:31:36 compute-0 gifted_morse[262338]: 167 167
Oct 01 13:31:36 compute-0 systemd[1]: libpod-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope: Deactivated successfully.
Oct 01 13:31:36 compute-0 podman[262322]: 2025-10-01 13:31:36.868889197 +0000 UTC m=+3.478826789 container attach b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:31:36 compute-0 podman[262322]: 2025-10-01 13:31:36.870710335 +0000 UTC m=+3.480647937 container died b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:31:37 compute-0 ceph-mon[74802]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1c43556610d320f121b67dd01e14e6871e43dfdc6ed0a7876365b30271aee47-merged.mount: Deactivated successfully.
Oct 01 13:31:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:38 compute-0 ceph-mon[74802]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:39 compute-0 podman[262322]: 2025-10-01 13:31:39.019777498 +0000 UTC m=+5.629715100 container remove b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:31:39 compute-0 systemd[1]: libpod-conmon-b5a5b68439ff84e03ceb38322e83571a4e3570b2de3880ffdb1470d35f46f1d5.scope: Deactivated successfully.
Oct 01 13:31:39 compute-0 podman[262362]: 2025-10-01 13:31:39.290233365 +0000 UTC m=+0.053971969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:39 compute-0 podman[262362]: 2025-10-01 13:31:39.706424359 +0000 UTC m=+0.470162903 container create 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:31:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:40 compute-0 systemd[1]: Started libpod-conmon-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope.
Oct 01 13:31:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:40 compute-0 ceph-mon[74802]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:41 compute-0 podman[262362]: 2025-10-01 13:31:41.026019784 +0000 UTC m=+1.789758318 container init 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:31:41 compute-0 podman[262362]: 2025-10-01 13:31:41.037257571 +0000 UTC m=+1.800996125 container start 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:31:41 compute-0 podman[262362]: 2025-10-01 13:31:41.484310218 +0000 UTC m=+2.248048772 container attach 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:31:42 compute-0 heuristic_black[262378]: {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     "0": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "devices": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "/dev/loop3"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             ],
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_name": "ceph_lv0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_size": "21470642176",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "name": "ceph_lv0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "tags": {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_name": "ceph",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.crush_device_class": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.encrypted": "0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_id": "0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.vdo": "0"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             },
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "vg_name": "ceph_vg0"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         }
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     ],
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     "1": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "devices": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "/dev/loop4"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             ],
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_name": "ceph_lv1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_size": "21470642176",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "name": "ceph_lv1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "tags": {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_name": "ceph",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.crush_device_class": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.encrypted": "0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_id": "1",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.vdo": "0"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             },
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "vg_name": "ceph_vg1"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         }
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     ],
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     "2": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "devices": [
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "/dev/loop5"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             ],
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_name": "ceph_lv2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_size": "21470642176",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "name": "ceph_lv2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "tags": {
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.cluster_name": "ceph",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.crush_device_class": "",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.encrypted": "0",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osd_id": "2",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:                 "ceph.vdo": "0"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             },
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "type": "block",
Oct 01 13:31:42 compute-0 heuristic_black[262378]:             "vg_name": "ceph_vg2"
Oct 01 13:31:42 compute-0 heuristic_black[262378]:         }
Oct 01 13:31:42 compute-0 heuristic_black[262378]:     ]
Oct 01 13:31:42 compute-0 heuristic_black[262378]: }
Oct 01 13:31:42 compute-0 systemd[1]: libpod-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope: Deactivated successfully.
Oct 01 13:31:42 compute-0 podman[262362]: 2025-10-01 13:31:42.153311728 +0000 UTC m=+2.917050272 container died 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:31:42 compute-0 ceph-mon[74802]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c54575d5bf37ced6b94738d59a69bd7bc2ca876c93f9bbcbf964a4b292126e-merged.mount: Deactivated successfully.
Oct 01 13:31:44 compute-0 ceph-mon[74802]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:31:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:31:45 compute-0 podman[262362]: 2025-10-01 13:31:45.161529192 +0000 UTC m=+5.925267756 container remove 69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:31:45 compute-0 sudo[262258]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:45 compute-0 systemd[1]: libpod-conmon-69e7b0b39f3f5c27fdf9ca48701f9fa21df900a36942fd24b307fe51b344bfcc.scope: Deactivated successfully.
Oct 01 13:31:45 compute-0 sudo[262400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:45 compute-0 sudo[262400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:45 compute-0 sudo[262400]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:45 compute-0 sudo[262425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:31:45 compute-0 sudo[262425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:45 compute-0 sudo[262425]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:45 compute-0 sudo[262450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:45 compute-0 sudo[262450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:45 compute-0 sudo[262450]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:45 compute-0 sudo[262475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:31:45 compute-0 sudo[262475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:45 compute-0 podman[262542]: 2025-10-01 13:31:45.89862856 +0000 UTC m=+0.027975302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:46 compute-0 podman[262542]: 2025-10-01 13:31:46.607216359 +0000 UTC m=+0.736563081 container create e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:31:46 compute-0 ceph-mon[74802]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:31:47 compute-0 systemd[1]: Started libpod-conmon-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope.
Oct 01 13:31:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:47 compute-0 podman[262542]: 2025-10-01 13:31:47.699939544 +0000 UTC m=+1.829286286 container init e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:31:47 compute-0 podman[262542]: 2025-10-01 13:31:47.70925273 +0000 UTC m=+1.838599442 container start e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:31:47 compute-0 ecstatic_sanderson[262559]: 167 167
Oct 01 13:31:47 compute-0 systemd[1]: libpod-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope: Deactivated successfully.
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:31:47
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta']
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:31:47 compute-0 ceph-mon[74802]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:47 compute-0 podman[262542]: 2025-10-01 13:31:47.834229317 +0000 UTC m=+1.963576049 container attach e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:31:47 compute-0 podman[262542]: 2025-10-01 13:31:47.836537401 +0000 UTC m=+1.965884143 container died e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:31:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db817f22e3e1daec8a995e51c51035068c98fe382e89745fede2374e375f4df-merged.mount: Deactivated successfully.
Oct 01 13:31:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:48 compute-0 podman[262542]: 2025-10-01 13:31:48.514578199 +0000 UTC m=+2.643924951 container remove e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:31:48 compute-0 systemd[1]: libpod-conmon-e1760aa045408796d7540c2de61dfe19bfdec4e65732a42ea6885da1cd2ea996.scope: Deactivated successfully.
Oct 01 13:31:48 compute-0 podman[262584]: 2025-10-01 13:31:48.674982744 +0000 UTC m=+0.030341017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:31:49 compute-0 podman[262584]: 2025-10-01 13:31:49.052033052 +0000 UTC m=+0.407391345 container create 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:31:49 compute-0 systemd[1]: Started libpod-conmon-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope.
Oct 01 13:31:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:31:49 compute-0 ceph-mon[74802]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:50 compute-0 podman[262584]: 2025-10-01 13:31:50.092903437 +0000 UTC m=+1.448261740 container init 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:31:50 compute-0 podman[262584]: 2025-10-01 13:31:50.105193799 +0000 UTC m=+1.460552092 container start 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:31:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:50 compute-0 podman[262584]: 2025-10-01 13:31:50.395786696 +0000 UTC m=+1.751144979 container attach 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:31:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]: {
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_id": 0,
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "type": "bluestore"
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     },
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_id": 2,
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "type": "bluestore"
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     },
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_id": 1,
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:         "type": "bluestore"
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]:     }
Oct 01 13:31:51 compute-0 lucid_mirzakhani[262600]: }
Oct 01 13:31:51 compute-0 systemd[1]: libpod-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Deactivated successfully.
Oct 01 13:31:51 compute-0 systemd[1]: libpod-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Consumed 1.054s CPU time.
Oct 01 13:31:51 compute-0 podman[262584]: 2025-10-01 13:31:51.153994785 +0000 UTC m=+2.509353048 container died 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:31:51 compute-0 ceph-mon[74802]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Oct 01 13:31:51 compute-0 unix_chkpwd[262645]: password check failed for user (root)
Oct 01 13:31:51 compute-0 sshd-session[262643]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46  user=root
Oct 01 13:31:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-38d2c99a39c046a1b1c02cacc00f0fddbdb88f122c44ef87bbd908d67bfc75e3-merged.mount: Deactivated successfully.
Oct 01 13:31:53 compute-0 sshd-session[262643]: Failed password for root from 156.236.31.46 port 45470 ssh2
Oct 01 13:31:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 13:31:54 compute-0 podman[262584]: 2025-10-01 13:31:54.406876905 +0000 UTC m=+5.762235188 container remove 0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:31:54 compute-0 sudo[262475]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:31:54 compute-0 systemd[1]: libpod-conmon-0c319d3fdb18f210094402a1adc45419168e357e11076ddafcb5a98aa0ed7316.scope: Deactivated successfully.
Oct 01 13:31:54 compute-0 ceph-mon[74802]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:31:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:31:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:31:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:31:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.312 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.336 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.336 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.337 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.337 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.338 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.372 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:31:55 compute-0 nova_compute[260022]: 2025-10-01 13:31:55.373 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:31:55 compute-0 sshd-session[262643]: Received disconnect from 156.236.31.46 port 45470:11: Bye Bye [preauth]
Oct 01 13:31:55 compute-0 sshd-session[262643]: Disconnected from authenticating user root 156.236.31.46 port 45470 [preauth]
Oct 01 13:31:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ec3c167b-9f34-4db5-a3fc-cae7b3db6187 does not exist
Oct 01 13:31:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 797e4a4e-6d79-4f0f-b5ff-bdf9638a6f15 does not exist
Oct 01 13:31:55 compute-0 ceph-mon[74802]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 13:31:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:31:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2707511699' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:31:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:55 compute-0 sudo[262667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:31:55 compute-0 sudo[262667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:55 compute-0 sudo[262667]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:55 compute-0 sudo[262692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:31:55 compute-0 sudo[262692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:31:55 compute-0 sudo[262692]: pam_unix(sudo:session): session closed for user root
Oct 01 13:31:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:31:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595402553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.047 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:31:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.283 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.284 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.284 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.285 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:31:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.404 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.405 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.436 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:31:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:31:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047125485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.941 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.949 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.966 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.967 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.968 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.977 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:31:56 compute-0 nova_compute[260022]: 2025-10-01 13:31:56.978 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:31:57 compute-0 nova_compute[260022]: 2025-10-01 13:31:57.003 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:31:57 compute-0 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:57 compute-0 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:57 compute-0 nova_compute[260022]: 2025-10-01 13:31:57.004 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:31:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:31:57 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3595402553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:31:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:31:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 13:31:58 compute-0 ceph-mon[74802]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:31:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2047125485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:32:00 compute-0 ceph-mon[74802]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct 01 13:32:00 compute-0 sshd-session[262741]: Invalid user mobile from 80.253.31.232 port 41410
Oct 01 13:32:00 compute-0 sshd-session[262741]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:32:00 compute-0 sshd-session[262741]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:32:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:32:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:01 compute-0 ceph-mon[74802]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Oct 01 13:32:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:02 compute-0 sshd-session[262741]: Failed password for invalid user mobile from 80.253.31.232 port 41410 ssh2
Oct 01 13:32:03 compute-0 sshd-session[262741]: Received disconnect from 80.253.31.232 port 41410:11: Bye Bye [preauth]
Oct 01 13:32:03 compute-0 sshd-session[262741]: Disconnected from invalid user mobile 80.253.31.232 port 41410 [preauth]
Oct 01 13:32:03 compute-0 podman[262744]: 2025-10-01 13:32:03.527246432 +0000 UTC m=+0.069075340 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 01 13:32:03 compute-0 podman[262750]: 2025-10-01 13:32:03.536453654 +0000 UTC m=+0.059157623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:32:03 compute-0 podman[262745]: 2025-10-01 13:32:03.564147686 +0000 UTC m=+0.097681290 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:32:03 compute-0 podman[262743]: 2025-10-01 13:32:03.568759383 +0000 UTC m=+0.114069331 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:32:04 compute-0 ceph-mon[74802]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:32:05 compute-0 sshd-session[262823]: Invalid user noroot from 27.254.137.144 port 40022
Oct 01 13:32:05 compute-0 sshd-session[262823]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:32:05 compute-0 sshd-session[262823]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:32:05 compute-0 ceph-mon[74802]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Oct 01 13:32:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 13:32:07 compute-0 unix_chkpwd[262827]: password check failed for user (root)
Oct 01 13:32:07 compute-0 sshd-session[262825]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139  user=root
Oct 01 13:32:07 compute-0 sshd-session[262823]: Failed password for invalid user noroot from 27.254.137.144 port 40022 ssh2
Oct 01 13:32:07 compute-0 ceph-mon[74802]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 13:32:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 13:32:08 compute-0 sshd-session[262825]: Failed password for root from 200.7.101.139 port 33444 ssh2
Oct 01 13:32:09 compute-0 sshd-session[262825]: Received disconnect from 200.7.101.139 port 33444:11: Bye Bye [preauth]
Oct 01 13:32:09 compute-0 sshd-session[262825]: Disconnected from authenticating user root 200.7.101.139 port 33444 [preauth]
Oct 01 13:32:09 compute-0 ceph-mon[74802]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 13:32:09 compute-0 sshd-session[262823]: Received disconnect from 27.254.137.144 port 40022:11: Bye Bye [preauth]
Oct 01 13:32:09 compute-0 sshd-session[262823]: Disconnected from invalid user noroot 27.254.137.144 port 40022 [preauth]
Oct 01 13:32:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 13:32:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:11 compute-0 ceph-mon[74802]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 13:32:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.299 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:32:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:32:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:32:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:32:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 01 13:32:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 01 13:32:12 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690996490' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 13:32:12 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 13:32:12 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 13:32:12 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 13:32:13 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/690996490' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 13:32:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 01 13:32:14 compute-0 ceph-mon[74802]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 01 13:32:14 compute-0 ceph-mon[74802]: from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 13:32:16 compute-0 ceph-mon[74802]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct 01 13:32:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 01 13:32:17 compute-0 ceph-mon[74802]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Oct 01 13:32:19 compute-0 ceph-mon[74802]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Oct 01 13:32:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:22 compute-0 ceph-mon[74802]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:23 compute-0 ceph-mon[74802]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Oct 01 13:32:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:32:25 compute-0 ceph-mon[74802]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Oct 01 13:32:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:32:27 compute-0 ceph-mon[74802]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:32:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:32:29 compute-0 ceph-mon[74802]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:32:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct 01 13:32:30 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1087025162' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 13:32:30 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 13:32:30 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 13:32:30 compute-0 ceph-mgr[75103]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct 01 13:32:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1087025162' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 01 13:32:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:31 compute-0 ceph-mon[74802]: from='client.14361 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 01 13:32:31 compute-0 ceph-mon[74802]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:33 compute-0 ceph-mon[74802]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:34 compute-0 podman[262831]: 2025-10-01 13:32:34.557709422 +0000 UTC m=+0.089264042 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 01 13:32:34 compute-0 podman[262830]: 2025-10-01 13:32:34.562770603 +0000 UTC m=+0.100140138 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:32:34 compute-0 podman[262829]: 2025-10-01 13:32:34.581214529 +0000 UTC m=+0.122650894 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:32:34 compute-0 podman[262828]: 2025-10-01 13:32:34.591103034 +0000 UTC m=+0.137840778 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:32:36 compute-0 ceph-mon[74802]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:37 compute-0 ceph-mon[74802]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:40 compute-0 ceph-mon[74802]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:41 compute-0 ceph-mon[74802]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:43 compute-0 ceph-mon[74802]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:45 compute-0 ceph-mon[74802]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:32:47
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta']
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:32:47 compute-0 ceph-mon[74802]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:32:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:32:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:49 compute-0 ceph-mon[74802]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:51 compute-0 ceph-mon[74802]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:54 compute-0 ceph-mon[74802]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:54 compute-0 nova_compute[260022]: 2025-10-01 13:32:54.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:32:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:32:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:32:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:32:55 compute-0 ceph-mon[74802]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:55 compute-0 PackageKit[192306]: daemon quit
Oct 01 13:32:55 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.343 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.490 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:32:55 compute-0 sudo[262931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:55 compute-0 sudo[262931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:55 compute-0 sudo[262931]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:32:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/945745090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:32:55 compute-0 nova_compute[260022]: 2025-10-01 13:32:55.991 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:32:56 compute-0 sudo[262957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:32:56 compute-0 sudo[262957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:56 compute-0 sudo[262957]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:56 compute-0 sudo[262983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:56 compute-0 sudo[262983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:56 compute-0 sudo[262983]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.157 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5211MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.159 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:32:56 compute-0 sudo[263008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:32:56 compute-0 sudo[263008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:32:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:32:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1953406059' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:32:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/945745090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:32:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.388 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.389 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:32:56 compute-0 podman[263121]: 2025-10-01 13:32:56.876615805 +0000 UTC m=+0.151008727 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:32:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:32:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729839456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.909 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:32:56 compute-0 nova_compute[260022]: 2025-10-01 13:32:56.916 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:32:57 compute-0 podman[263121]: 2025-10-01 13:32:57.059215346 +0000 UTC m=+0.333608208 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:32:57 compute-0 nova_compute[260022]: 2025-10-01 13:32:57.103 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:32:57 compute-0 nova_compute[260022]: 2025-10-01 13:32:57.107 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:32:57 compute-0 nova_compute[260022]: 2025-10-01 13:32:57.108 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:32:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:32:57 compute-0 ceph-mon[74802]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:57 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2729839456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.111 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.111 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.112 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:32:58 compute-0 sudo[263008]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:32:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:32:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.241 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.241 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:58 compute-0 nova_compute[260022]: 2025-10-01 13:32:58.242 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:32:58 compute-0 sudo[263280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:58 compute-0 sudo[263280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:58 compute-0 sudo[263280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:58 compute-0 sudo[263305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:32:58 compute-0 sudo[263305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:58 compute-0 sudo[263305]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:58 compute-0 sudo[263330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:58 compute-0 sudo[263330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:58 compute-0 sudo[263330]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:58 compute-0 sudo[263355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:32:58 compute-0 sudo[263355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:59 compute-0 sudo[263355]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8d52c35e-ddc1-4050-ade1-f3501704b1ae does not exist
Oct 01 13:32:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a6f1498-2d97-4bd5-9abf-510b7e1e4f36 does not exist
Oct 01 13:32:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ba4af2b8-f3b9-42fc-94ce-c9f42d7e9b25 does not exist
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:32:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:59 compute-0 ceph-mon[74802]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:32:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:32:59 compute-0 sudo[263410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:59 compute-0 sudo[263410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:59 compute-0 sudo[263410]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:59 compute-0 sudo[263435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:32:59 compute-0 sudo[263435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:59 compute-0 sudo[263435]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:59 compute-0 sudo[263460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:32:59 compute-0 sudo[263460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:59 compute-0 sudo[263460]: pam_unix(sudo:session): session closed for user root
Oct 01 13:32:59 compute-0 sudo[263485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:32:59 compute-0 sudo[263485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:32:59 compute-0 podman[263550]: 2025-10-01 13:32:59.934972944 +0000 UTC m=+0.049488165 container create 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:32:59 compute-0 systemd[1]: Started libpod-conmon-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope.
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:32:59.913402498 +0000 UTC m=+0.027917809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:33:00.036453444 +0000 UTC m=+0.150968775 container init 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:33:00.048681163 +0000 UTC m=+0.163196404 container start 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:33:00.053067033 +0000 UTC m=+0.167582304 container attach 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:33:00 compute-0 funny_hamilton[263566]: 167 167
Oct 01 13:33:00 compute-0 systemd[1]: libpod-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope: Deactivated successfully.
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:33:00.058427893 +0000 UTC m=+0.172943144 container died 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9e955de163194503e6de1fc6e6d7efcca40ce5e8adebcd6f7b69c767706c2d1-merged.mount: Deactivated successfully.
Oct 01 13:33:00 compute-0 podman[263550]: 2025-10-01 13:33:00.11771928 +0000 UTC m=+0.232234511 container remove 3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:00 compute-0 systemd[1]: libpod-conmon-3bbfa49fa6612211b6314cf261fd2e6184a6b356a0f7dacb979e71a954b8c8ed.scope: Deactivated successfully.
Oct 01 13:33:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:33:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:33:00 compute-0 podman[263593]: 2025-10-01 13:33:00.316978062 +0000 UTC m=+0.069640099 container create 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:33:00 compute-0 podman[263593]: 2025-10-01 13:33:00.27482563 +0000 UTC m=+0.027487727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:00 compute-0 systemd[1]: Started libpod-conmon-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope.
Oct 01 13:33:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:00 compute-0 podman[263593]: 2025-10-01 13:33:00.41812717 +0000 UTC m=+0.170789247 container init 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:33:00 compute-0 podman[263593]: 2025-10-01 13:33:00.437715293 +0000 UTC m=+0.190377350 container start 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:33:00 compute-0 podman[263593]: 2025-10-01 13:33:00.453604779 +0000 UTC m=+0.206266836 container attach 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:33:00 compute-0 sshd-session[263584]: Invalid user gt from 156.236.31.46 port 45554
Oct 01 13:33:00 compute-0 sshd-session[263584]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:33:00 compute-0 sshd-session[263584]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=156.236.31.46
Oct 01 13:33:00 compute-0 unix_chkpwd[263616]: password check failed for user (root)
Oct 01 13:33:00 compute-0 sshd-session[263577]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232  user=root
Oct 01 13:33:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:01 compute-0 ceph-mon[74802]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:01 compute-0 strange_merkle[263611]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:33:01 compute-0 strange_merkle[263611]: --> relative data size: 1.0
Oct 01 13:33:01 compute-0 strange_merkle[263611]: --> All data devices are unavailable
Oct 01 13:33:01 compute-0 systemd[1]: libpod-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Deactivated successfully.
Oct 01 13:33:01 compute-0 systemd[1]: libpod-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Consumed 1.197s CPU time.
Oct 01 13:33:01 compute-0 podman[263641]: 2025-10-01 13:33:01.738878722 +0000 UTC m=+0.026507975 container died 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3436738dbb72d955e2a6d1545077270ed1afbeeee6c8e3b95ea8fdf26835cfc0-merged.mount: Deactivated successfully.
Oct 01 13:33:01 compute-0 podman[263641]: 2025-10-01 13:33:01.876804461 +0000 UTC m=+0.164433694 container remove 8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:33:01 compute-0 systemd[1]: libpod-conmon-8e0afac76066aaa7988115c13f9f96f9342873e9e918c73ae82ee9c5318fb4c3.scope: Deactivated successfully.
Oct 01 13:33:01 compute-0 sudo[263485]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:01 compute-0 sudo[263656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:33:02 compute-0 sudo[263656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:02 compute-0 sudo[263656]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:02 compute-0 sudo[263681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:33:02 compute-0 sudo[263681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:02 compute-0 sudo[263681]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:02 compute-0 sudo[263706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:33:02 compute-0 sudo[263706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:02 compute-0 sudo[263706]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:02 compute-0 sudo[263731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:33:02 compute-0 sudo[263731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.632331855 +0000 UTC m=+0.030154711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.729328951 +0000 UTC m=+0.127151757 container create ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:33:02 compute-0 systemd[1]: Started libpod-conmon-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope.
Oct 01 13:33:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.869623366 +0000 UTC m=+0.267446222 container init ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.882345921 +0000 UTC m=+0.280168727 container start ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:33:02 compute-0 condescending_nightingale[263813]: 167 167
Oct 01 13:33:02 compute-0 systemd[1]: libpod-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope: Deactivated successfully.
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.908809923 +0000 UTC m=+0.306632790 container attach ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:02 compute-0 podman[263796]: 2025-10-01 13:33:02.910268699 +0000 UTC m=+0.308091505 container died ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:02 compute-0 sshd-session[263584]: Failed password for invalid user gt from 156.236.31.46 port 45554 ssh2
Oct 01 13:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-153daa18da67583dca5029c9acbfb1c93530dcffbada4cee9217df1e0c0d355d-merged.mount: Deactivated successfully.
Oct 01 13:33:03 compute-0 sshd-session[263577]: Failed password for root from 80.253.31.232 port 50352 ssh2
Oct 01 13:33:03 compute-0 podman[263796]: 2025-10-01 13:33:03.055033287 +0000 UTC m=+0.452856093 container remove ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:33:03 compute-0 systemd[1]: libpod-conmon-ec46058295f2931318069322bca3639c78af3a6aca79de7f294141efcdf362a6.scope: Deactivated successfully.
Oct 01 13:33:03 compute-0 sshd-session[263584]: Received disconnect from 156.236.31.46 port 45554:11: Bye Bye [preauth]
Oct 01 13:33:03 compute-0 sshd-session[263584]: Disconnected from invalid user gt 156.236.31.46 port 45554 [preauth]
Oct 01 13:33:03 compute-0 podman[263839]: 2025-10-01 13:33:03.267198009 +0000 UTC m=+0.046872253 container create 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:33:03 compute-0 systemd[1]: Started libpod-conmon-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope.
Oct 01 13:33:03 compute-0 podman[263839]: 2025-10-01 13:33:03.245971933 +0000 UTC m=+0.025646167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:03 compute-0 podman[263839]: 2025-10-01 13:33:03.372296053 +0000 UTC m=+0.151970367 container init 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:03 compute-0 podman[263839]: 2025-10-01 13:33:03.389672186 +0000 UTC m=+0.169346440 container start 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:33:03 compute-0 podman[263839]: 2025-10-01 13:33:03.40205293 +0000 UTC m=+0.181727244 container attach 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:33:03 compute-0 ceph-mon[74802]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]: {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     "0": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "devices": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "/dev/loop3"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             ],
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_name": "ceph_lv0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_size": "21470642176",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "name": "ceph_lv0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "tags": {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_name": "ceph",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.crush_device_class": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.encrypted": "0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_id": "0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.vdo": "0"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             },
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "vg_name": "ceph_vg0"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         }
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     ],
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     "1": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "devices": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "/dev/loop4"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             ],
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_name": "ceph_lv1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_size": "21470642176",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "name": "ceph_lv1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "tags": {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_name": "ceph",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.crush_device_class": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.encrypted": "0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_id": "1",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.vdo": "0"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             },
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "vg_name": "ceph_vg1"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         }
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     ],
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     "2": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "devices": [
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "/dev/loop5"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             ],
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_name": "ceph_lv2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_size": "21470642176",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "name": "ceph_lv2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "tags": {
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.cluster_name": "ceph",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.crush_device_class": "",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.encrypted": "0",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osd_id": "2",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:                 "ceph.vdo": "0"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             },
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "type": "block",
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:             "vg_name": "ceph_vg2"
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:         }
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]:     ]
Oct 01 13:33:04 compute-0 vigilant_almeida[263855]: }
Oct 01 13:33:04 compute-0 systemd[1]: libpod-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope: Deactivated successfully.
Oct 01 13:33:04 compute-0 podman[263839]: 2025-10-01 13:33:04.237635861 +0000 UTC m=+1.017310115 container died 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d91a94b34eaeda0eaf8491b6ef83b68cf34aebc7b95e573094bc95b644cd4f9-merged.mount: Deactivated successfully.
Oct 01 13:33:04 compute-0 podman[263839]: 2025-10-01 13:33:04.346539357 +0000 UTC m=+1.126213571 container remove 28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:33:04 compute-0 systemd[1]: libpod-conmon-28fc32aeb31401a5a25e6ddfcb1331555dfdf0ae90272ae41111536beb9ee02c.scope: Deactivated successfully.
Oct 01 13:33:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:04 compute-0 sudo[263731]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:04 compute-0 sudo[263879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:33:04 compute-0 sudo[263879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:04 compute-0 sudo[263879]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:04 compute-0 sshd-session[263577]: Received disconnect from 80.253.31.232 port 50352:11: Bye Bye [preauth]
Oct 01 13:33:04 compute-0 sshd-session[263577]: Disconnected from authenticating user root 80.253.31.232 port 50352 [preauth]
Oct 01 13:33:04 compute-0 sudo[263904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:33:04 compute-0 sudo[263904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:04 compute-0 sudo[263904]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:04 compute-0 podman[263930]: 2025-10-01 13:33:04.730152695 +0000 UTC m=+0.071825926 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 13:33:04 compute-0 sudo[263962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:33:04 compute-0 sudo[263962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:04 compute-0 podman[263929]: 2025-10-01 13:33:04.758948432 +0000 UTC m=+0.095611104 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd)
Oct 01 13:33:04 compute-0 sudo[263962]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:04 compute-0 podman[263931]: 2025-10-01 13:33:04.77052513 +0000 UTC m=+0.100319053 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 13:33:04 compute-0 podman[263928]: 2025-10-01 13:33:04.837645066 +0000 UTC m=+0.182210159 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 01 13:33:04 compute-0 sudo[264027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:33:04 compute-0 sudo[264027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.272193556 +0000 UTC m=+0.060994273 container create 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:33:05 compute-0 systemd[1]: Started libpod-conmon-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope.
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.241630773 +0000 UTC m=+0.030431540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.387243547 +0000 UTC m=+0.176044314 container init 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.40305246 +0000 UTC m=+0.191853177 container start 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.408306037 +0000 UTC m=+0.197106794 container attach 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:33:05 compute-0 admiring_panini[264111]: 167 167
Oct 01 13:33:05 compute-0 systemd[1]: libpod-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope: Deactivated successfully.
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.413920145 +0000 UTC m=+0.202720882 container died 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-77ad76d600f919b5d7ef41e268918e7ca2edca47ebf28562c8ee7d8b2bde84a9-merged.mount: Deactivated successfully.
Oct 01 13:33:05 compute-0 podman[264094]: 2025-10-01 13:33:05.466800548 +0000 UTC m=+0.255601225 container remove 3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:33:05 compute-0 ceph-mon[74802]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:05 compute-0 systemd[1]: libpod-conmon-3e047a689fba8f16eddad94a967e4c46ac25b63e89f642c3c0cf25c3bf7d3565.scope: Deactivated successfully.
Oct 01 13:33:05 compute-0 podman[264134]: 2025-10-01 13:33:05.630596191 +0000 UTC m=+0.028254900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:33:05 compute-0 podman[264134]: 2025-10-01 13:33:05.742597885 +0000 UTC m=+0.140256594 container create f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:05 compute-0 systemd[1]: Started libpod-conmon-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope.
Oct 01 13:33:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:33:06 compute-0 podman[264134]: 2025-10-01 13:33:06.093384288 +0000 UTC m=+0.491043047 container init f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:33:06 compute-0 podman[264134]: 2025-10-01 13:33:06.10599129 +0000 UTC m=+0.503649969 container start f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:33:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:06 compute-0 podman[264134]: 2025-10-01 13:33:06.24897265 +0000 UTC m=+0.646631339 container attach f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct 01 13:33:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]: {
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_id": 0,
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "type": "bluestore"
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     },
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_id": 2,
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "type": "bluestore"
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     },
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_id": 1,
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:         "type": "bluestore"
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]:     }
Oct 01 13:33:07 compute-0 intelligent_mcclintock[264150]: }
Oct 01 13:33:07 compute-0 systemd[1]: libpod-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Deactivated successfully.
Oct 01 13:33:07 compute-0 systemd[1]: libpod-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Consumed 1.117s CPU time.
Oct 01 13:33:07 compute-0 podman[264134]: 2025-10-01 13:33:07.217450651 +0000 UTC m=+1.615109360 container died f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e13dcab1ca42adefd9ff72c7cdcb9decc7b85513e69bcbd4886fd2f184c8ed3-merged.mount: Deactivated successfully.
Oct 01 13:33:07 compute-0 podman[264134]: 2025-10-01 13:33:07.411908229 +0000 UTC m=+1.809566928 container remove f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:33:07 compute-0 systemd[1]: libpod-conmon-f6ac9ff5b769fa9b2acd10e2ae9566b5a092c15c1ba88bb85a7749129a08a2cd.scope: Deactivated successfully.
Oct 01 13:33:07 compute-0 sudo[264027]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:33:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:33:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:33:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:33:07 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6d69ae80-b7b7-4df9-b7c3-b015dad3bed1 does not exist
Oct 01 13:33:07 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 894b6b7f-a6fe-4b56-949c-a8d0aad55373 does not exist
Oct 01 13:33:07 compute-0 sudo[264198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:33:07 compute-0 sudo[264198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:07 compute-0 sudo[264198]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:07 compute-0 sudo[264223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:33:07 compute-0 sudo[264223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:33:07 compute-0 sudo[264223]: pam_unix(sudo:session): session closed for user root
Oct 01 13:33:08 compute-0 ceph-mon[74802]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:33:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:33:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:09 compute-0 ceph-mon[74802]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:11 compute-0 ceph-mon[74802]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:33:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:33:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:33:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:13 compute-0 ceph-mon[74802]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:15 compute-0 ceph-mon[74802]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:16 compute-0 sshd-session[264248]: Invalid user seekcy from 27.254.137.144 port 35590
Oct 01 13:33:16 compute-0 sshd-session[264248]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:33:16 compute-0 sshd-session[264248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:33:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:17 compute-0 ceph-mon[74802]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:17 compute-0 sshd-session[264248]: Failed password for invalid user seekcy from 27.254.137.144 port 35590 ssh2
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:19 compute-0 sshd-session[264248]: Received disconnect from 27.254.137.144 port 35590:11: Bye Bye [preauth]
Oct 01 13:33:19 compute-0 sshd-session[264248]: Disconnected from invalid user seekcy 27.254.137.144 port 35590 [preauth]
Oct 01 13:33:20 compute-0 ceph-mon[74802]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:21 compute-0 ceph-mon[74802]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:23 compute-0 ceph-mon[74802]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:24 compute-0 sshd-session[264250]: Invalid user ubuntu from 200.7.101.139 port 36176
Oct 01 13:33:24 compute-0 sshd-session[264250]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:33:24 compute-0 sshd-session[264250]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:33:26 compute-0 ceph-mon[74802]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:26 compute-0 sshd-session[264250]: Failed password for invalid user ubuntu from 200.7.101.139 port 36176 ssh2
Oct 01 13:33:27 compute-0 ceph-mon[74802]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:27 compute-0 sshd-session[264250]: Received disconnect from 200.7.101.139 port 36176:11: Bye Bye [preauth]
Oct 01 13:33:27 compute-0 sshd-session[264250]: Disconnected from invalid user ubuntu 200.7.101.139 port 36176 [preauth]
Oct 01 13:33:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:29 compute-0 ceph-mon[74802]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:31 compute-0 ceph-mon[74802]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:33 compute-0 ceph-mon[74802]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:35 compute-0 podman[264255]: 2025-10-01 13:33:35.535540974 +0000 UTC m=+0.068228893 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:33:35 compute-0 podman[264253]: 2025-10-01 13:33:35.552761741 +0000 UTC m=+0.096263544 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:33:35 compute-0 podman[264254]: 2025-10-01 13:33:35.558277056 +0000 UTC m=+0.088074433 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid)
Oct 01 13:33:35 compute-0 podman[264252]: 2025-10-01 13:33:35.597184055 +0000 UTC m=+0.139472450 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:33:35 compute-0 ceph-mon[74802]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:37 compute-0 ceph-mon[74802]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:40 compute-0 ceph-mon[74802]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:41 compute-0 ceph-mon[74802]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:43 compute-0 ceph-mon[74802]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:45 compute-0 ceph-mon[74802]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.105 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:33:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.107 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:33:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:33:47.109 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:33:47 compute-0 ceph-mon[74802]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:33:47
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'images', 'backups', 'default.rgw.meta']
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:33:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:33:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:50 compute-0 ceph-mon[74802]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:51 compute-0 ceph-mon[74802]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:53 compute-0 ceph-mon[74802]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:54 compute-0 unix_chkpwd[264338]: password check failed for user (root)
Oct 01 13:33:54 compute-0 sshd-session[264336]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=14.103.127.7  user=root
Oct 01 13:33:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:33:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:33:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:33:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:33:55 compute-0 nova_compute[260022]: 2025-10-01 13:33:55.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:55 compute-0 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:55 compute-0 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:55 compute-0 nova_compute[260022]: 2025-10-01 13:33:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:33:56 compute-0 ceph-mon[74802]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:33:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/149275023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:33:56 compute-0 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:56 compute-0 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:33:56 compute-0 nova_compute[260022]: 2025-10-01 13:33:56.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:33:56 compute-0 nova_compute[260022]: 2025-10-01 13:33:56.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:33:56 compute-0 nova_compute[260022]: 2025-10-01 13:33:56.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:33:57 compute-0 sshd-session[264336]: Failed password for root from 14.103.127.7 port 41010 ssh2
Oct 01 13:33:57 compute-0 ceph-mon[74802]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:33:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.368 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.391 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.392 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.392 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.393 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.393 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:33:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:33:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390611866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:33:57 compute-0 nova_compute[260022]: 2025-10-01 13:33:57.849 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.060 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.061 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.121 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.121 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.141 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:33:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2390611866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:33:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:33:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279081487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.608 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.615 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.630 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.633 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:33:58 compute-0 nova_compute[260022]: 2025-10-01 13:33:58.634 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:33:58 compute-0 sshd-session[264336]: Received disconnect from 14.103.127.7 port 41010:11: Bye Bye [preauth]
Oct 01 13:33:58 compute-0 sshd-session[264336]: Disconnected from authenticating user root 14.103.127.7 port 41010 [preauth]
Oct 01 13:33:59 compute-0 ceph-mon[74802]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:33:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1279081487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:34:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:01 compute-0 ceph-mon[74802]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:03 compute-0 ceph-mon[74802]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:06 compute-0 ceph-mon[74802]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:06 compute-0 podman[264386]: 2025-10-01 13:34:06.56113907 +0000 UTC m=+0.084931913 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 13:34:06 compute-0 podman[264385]: 2025-10-01 13:34:06.565214991 +0000 UTC m=+0.093965983 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:34:06 compute-0 podman[264384]: 2025-10-01 13:34:06.586124086 +0000 UTC m=+0.119878007 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:34:06 compute-0 podman[264383]: 2025-10-01 13:34:06.600819693 +0000 UTC m=+0.137576259 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:34:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:07 compute-0 ceph-mon[74802]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:07 compute-0 sshd-session[264462]: Invalid user test from 80.253.31.232 port 42764
Oct 01 13:34:07 compute-0 sshd-session[264462]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:34:07 compute-0 sshd-session[264462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:34:07 compute-0 sudo[264464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:07 compute-0 sudo[264464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:07 compute-0 sudo[264464]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:07 compute-0 sudo[264489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:34:07 compute-0 sudo[264489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:07 compute-0 sudo[264489]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:07 compute-0 sudo[264514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:07 compute-0 sudo[264514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:07 compute-0 sudo[264514]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:08 compute-0 sudo[264539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:34:08 compute-0 sudo[264539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:08 compute-0 sudo[264539]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bd509792-e458-4508-879e-a322088a4be0 does not exist
Oct 01 13:34:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2d5929be-0dcb-4a93-aad7-72def12fc9a5 does not exist
Oct 01 13:34:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f1aa3454-71bd-42d8-811f-e3cbcdd6240d does not exist
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:34:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:34:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:34:08 compute-0 sudo[264596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:08 compute-0 sudo[264596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:08 compute-0 sudo[264596]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:08 compute-0 sudo[264621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:34:08 compute-0 sudo[264621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:08 compute-0 sudo[264621]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:08 compute-0 sudo[264646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:08 compute-0 sudo[264646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:08 compute-0 sudo[264646]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:09 compute-0 sudo[264671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:34:09 compute-0 sudo[264671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:09 compute-0 podman[264734]: 2025-10-01 13:34:09.465367175 +0000 UTC m=+0.037498485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:09 compute-0 podman[264734]: 2025-10-01 13:34:09.558688674 +0000 UTC m=+0.130819944 container create 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:34:09 compute-0 sshd-session[264462]: Failed password for invalid user test from 80.253.31.232 port 42764 ssh2
Oct 01 13:34:09 compute-0 systemd[1]: Started libpod-conmon-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope.
Oct 01 13:34:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:10 compute-0 ceph-mon[74802]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:34:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:34:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:34:10 compute-0 podman[264734]: 2025-10-01 13:34:10.150662283 +0000 UTC m=+0.722793543 container init 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:34:10 compute-0 podman[264734]: 2025-10-01 13:34:10.160831157 +0000 UTC m=+0.732962427 container start 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:34:10 compute-0 clever_mclean[264750]: 167 167
Oct 01 13:34:10 compute-0 systemd[1]: libpod-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope: Deactivated successfully.
Oct 01 13:34:10 compute-0 podman[264734]: 2025-10-01 13:34:10.284674358 +0000 UTC m=+0.856805618 container attach 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:34:10 compute-0 podman[264734]: 2025-10-01 13:34:10.285892317 +0000 UTC m=+0.858023557 container died 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:34:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab7a3cfd24c1570e910cc4518f05d9fc3ef697350ccafb5bc63d02ddea356aa9-merged.mount: Deactivated successfully.
Oct 01 13:34:10 compute-0 podman[264734]: 2025-10-01 13:34:10.673504301 +0000 UTC m=+1.245635571 container remove 89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:34:10 compute-0 systemd[1]: libpod-conmon-89a4d3476aa331c17eda03f5d0b8d086325c0f887a0fe9ab357746e4c82b59c2.scope: Deactivated successfully.
Oct 01 13:34:10 compute-0 sshd-session[264462]: Received disconnect from 80.253.31.232 port 42764:11: Bye Bye [preauth]
Oct 01 13:34:10 compute-0 sshd-session[264462]: Disconnected from invalid user test 80.253.31.232 port 42764 [preauth]
Oct 01 13:34:10 compute-0 podman[264776]: 2025-10-01 13:34:10.893658478 +0000 UTC m=+0.042654518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:11 compute-0 podman[264776]: 2025-10-01 13:34:11.024403718 +0000 UTC m=+0.173399758 container create 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:34:11 compute-0 systemd[1]: Started libpod-conmon-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope.
Oct 01 13:34:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:11 compute-0 ceph-mon[74802]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:11 compute-0 podman[264776]: 2025-10-01 13:34:11.375541714 +0000 UTC m=+0.524537764 container init 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:34:11 compute-0 podman[264776]: 2025-10-01 13:34:11.388696232 +0000 UTC m=+0.537692272 container start 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:34:11 compute-0 podman[264776]: 2025-10-01 13:34:11.470956251 +0000 UTC m=+0.619952361 container attach 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:34:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.300 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:34:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:34:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:34:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:34:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:12 compute-0 mystifying_kapitsa[264793]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:34:12 compute-0 mystifying_kapitsa[264793]: --> relative data size: 1.0
Oct 01 13:34:12 compute-0 mystifying_kapitsa[264793]: --> All data devices are unavailable
Oct 01 13:34:12 compute-0 systemd[1]: libpod-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Deactivated successfully.
Oct 01 13:34:12 compute-0 systemd[1]: libpod-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Consumed 1.161s CPU time.
Oct 01 13:34:12 compute-0 podman[264822]: 2025-10-01 13:34:12.644251128 +0000 UTC m=+0.028910840 container died 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-617c14e086e6fc946cb9e847e5de3eb16cd380d59a47e3cf13f533f332598c88-merged.mount: Deactivated successfully.
Oct 01 13:34:12 compute-0 podman[264822]: 2025-10-01 13:34:12.701868213 +0000 UTC m=+0.086527875 container remove 98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:34:12 compute-0 systemd[1]: libpod-conmon-98784fe3377c799c7faa23c1125f802f9e9807f250cec12262f47afe9518aef0.scope: Deactivated successfully.
Oct 01 13:34:12 compute-0 sudo[264671]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:12 compute-0 sudo[264837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:12 compute-0 sudo[264837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:12 compute-0 sudo[264837]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:12 compute-0 sudo[264862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:34:12 compute-0 sudo[264862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:12 compute-0 sudo[264862]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:12 compute-0 sudo[264887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:12 compute-0 sudo[264887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:12 compute-0 sudo[264887]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:13 compute-0 sudo[264912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:34:13 compute-0 sudo[264912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.475557645 +0000 UTC m=+0.052701389 container create d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:34:13 compute-0 ceph-mon[74802]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:13 compute-0 systemd[1]: Started libpod-conmon-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope.
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.452545952 +0000 UTC m=+0.029689786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.576852438 +0000 UTC m=+0.153996272 container init d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.584483281 +0000 UTC m=+0.161627025 container start d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:34:13 compute-0 jovial_williamson[264994]: 167 167
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.590264205 +0000 UTC m=+0.167407999 container attach d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:34:13 compute-0 systemd[1]: libpod-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope: Deactivated successfully.
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.59293167 +0000 UTC m=+0.170075454 container died d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-77588ad29ab6cdb0f3cf2aae62b52abff392dea127236afe7cf62b1077ae2030-merged.mount: Deactivated successfully.
Oct 01 13:34:13 compute-0 podman[264978]: 2025-10-01 13:34:13.653962362 +0000 UTC m=+0.231106106 container remove d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:34:13 compute-0 systemd[1]: libpod-conmon-d6cfc1ec263e7b78351139598c3787bfd03190019d9730df7e3cca2d532a83bc.scope: Deactivated successfully.
Oct 01 13:34:13 compute-0 podman[265019]: 2025-10-01 13:34:13.874583963 +0000 UTC m=+0.049158875 container create 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:34:13 compute-0 systemd[1]: Started libpod-conmon-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope.
Oct 01 13:34:13 compute-0 podman[265019]: 2025-10-01 13:34:13.851045793 +0000 UTC m=+0.025620725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:13 compute-0 podman[265019]: 2025-10-01 13:34:13.982803627 +0000 UTC m=+0.157378559 container init 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:34:13 compute-0 podman[265019]: 2025-10-01 13:34:13.991666859 +0000 UTC m=+0.166241771 container start 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:34:14 compute-0 podman[265019]: 2025-10-01 13:34:14.000773579 +0000 UTC m=+0.175348521 container attach 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 13:34:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]: {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     "0": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "devices": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "/dev/loop3"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             ],
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_name": "ceph_lv0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_size": "21470642176",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "name": "ceph_lv0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "tags": {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_name": "ceph",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.crush_device_class": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.encrypted": "0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_id": "0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.vdo": "0"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             },
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "vg_name": "ceph_vg0"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         }
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     ],
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     "1": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "devices": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "/dev/loop4"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             ],
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_name": "ceph_lv1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_size": "21470642176",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "name": "ceph_lv1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "tags": {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_name": "ceph",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.crush_device_class": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.encrypted": "0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_id": "1",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.vdo": "0"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             },
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "vg_name": "ceph_vg1"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         }
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     ],
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     "2": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "devices": [
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "/dev/loop5"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             ],
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_name": "ceph_lv2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_size": "21470642176",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "name": "ceph_lv2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "tags": {
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.cluster_name": "ceph",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.crush_device_class": "",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.encrypted": "0",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osd_id": "2",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:                 "ceph.vdo": "0"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             },
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "type": "block",
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:             "vg_name": "ceph_vg2"
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:         }
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]:     ]
Oct 01 13:34:14 compute-0 beautiful_elgamal[265036]: }
Oct 01 13:34:14 compute-0 systemd[1]: libpod-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope: Deactivated successfully.
Oct 01 13:34:14 compute-0 podman[265019]: 2025-10-01 13:34:14.782782456 +0000 UTC m=+0.957357368 container died 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 01 13:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e135f4c4cb2e99480615c22581ea69cf21b669c4ba2cc3b447bb33fa74fe3338-merged.mount: Deactivated successfully.
Oct 01 13:34:14 compute-0 podman[265019]: 2025-10-01 13:34:14.860890201 +0000 UTC m=+1.035465113 container remove 99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:34:14 compute-0 systemd[1]: libpod-conmon-99307b1c092a55d80c6127e6af5fe00b67d5349e87240ac459b22c63f04f924e.scope: Deactivated successfully.
Oct 01 13:34:14 compute-0 sudo[264912]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:14 compute-0 sudo[265058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:14 compute-0 sudo[265058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:14 compute-0 sudo[265058]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:15 compute-0 sudo[265083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:34:15 compute-0 sudo[265083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:15 compute-0 sudo[265083]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:15 compute-0 sudo[265108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:15 compute-0 sudo[265108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:15 compute-0 sudo[265108]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:15 compute-0 sudo[265133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:34:15 compute-0 sudo[265133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.501594031 +0000 UTC m=+0.042260016 container create 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:34:15 compute-0 ceph-mon[74802]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:15 compute-0 systemd[1]: Started libpod-conmon-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope.
Oct 01 13:34:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.48459545 +0000 UTC m=+0.025261445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.603562225 +0000 UTC m=+0.144228260 container init 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.611199349 +0000 UTC m=+0.151865344 container start 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:34:15 compute-0 clever_bell[265213]: 167 167
Oct 01 13:34:15 compute-0 systemd[1]: libpod-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope: Deactivated successfully.
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.635001246 +0000 UTC m=+0.175667321 container attach 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.635531283 +0000 UTC m=+0.176197298 container died 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c861fcd4030294f23861f972215bf0f55e641010d1faaa0ec9f26615df3376f3-merged.mount: Deactivated successfully.
Oct 01 13:34:15 compute-0 podman[265197]: 2025-10-01 13:34:15.776682016 +0000 UTC m=+0.317348001 container remove 47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:34:15 compute-0 systemd[1]: libpod-conmon-47557da4af93dc248e9d6e8b38121fc2b62a545c4076cb054ac005f6d89f43ac.scope: Deactivated successfully.
Oct 01 13:34:16 compute-0 podman[265237]: 2025-10-01 13:34:16.010596939 +0000 UTC m=+0.093348612 container create 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:34:16 compute-0 podman[265237]: 2025-10-01 13:34:15.946930923 +0000 UTC m=+0.029682596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:34:16 compute-0 systemd[1]: Started libpod-conmon-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope.
Oct 01 13:34:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:34:16 compute-0 podman[265237]: 2025-10-01 13:34:16.253253831 +0000 UTC m=+0.336005534 container init 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:34:16 compute-0 podman[265237]: 2025-10-01 13:34:16.26578114 +0000 UTC m=+0.348532823 container start 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:34:16 compute-0 podman[265237]: 2025-10-01 13:34:16.38297592 +0000 UTC m=+0.465727673 container attach 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:34:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:17 compute-0 sweet_bohr[265254]: {
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_id": 0,
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "type": "bluestore"
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     },
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_id": 2,
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "type": "bluestore"
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     },
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_id": 1,
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:         "type": "bluestore"
Oct 01 13:34:17 compute-0 sweet_bohr[265254]:     }
Oct 01 13:34:17 compute-0 sweet_bohr[265254]: }
Oct 01 13:34:17 compute-0 systemd[1]: libpod-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Deactivated successfully.
Oct 01 13:34:17 compute-0 systemd[1]: libpod-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Consumed 1.090s CPU time.
Oct 01 13:34:17 compute-0 podman[265237]: 2025-10-01 13:34:17.349871531 +0000 UTC m=+1.432623184 container died 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:34:17 compute-0 ceph-mon[74802]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-af515715e777b927dac57c83f10549b2413e71ba1e88f8024712fc021aaa090b-merged.mount: Deactivated successfully.
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:18 compute-0 podman[265237]: 2025-10-01 13:34:18.12351859 +0000 UTC m=+2.206270223 container remove 5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:34:18 compute-0 sudo[265133]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:34:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:34:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fdd626ed-1066-47fa-9877-ab38ff872ac2 does not exist
Oct 01 13:34:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5bda3b85-5fa6-4e19-ae5a-261a0078cb66 does not exist
Oct 01 13:34:18 compute-0 systemd[1]: libpod-conmon-5bb50e35fea28c1fea874c12320de0a507b63a9ffeda58921012a32ecdb496aa.scope: Deactivated successfully.
Oct 01 13:34:18 compute-0 sudo[265300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:34:18 compute-0 sudo[265300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:18 compute-0 sudo[265300]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:18 compute-0 sudo[265325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:34:18 compute-0 sudo[265325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:34:18 compute-0 sudo[265325]: pam_unix(sudo:session): session closed for user root
Oct 01 13:34:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:34:19 compute-0 ceph-mon[74802]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:21 compute-0 ceph-mon[74802]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:23 compute-0 ceph-mon[74802]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:25 compute-0 ceph-mon[74802]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:27 compute-0 ceph-mon[74802]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:28 compute-0 sshd-session[265351]: Invalid user seekcy from 27.254.137.144 port 59474
Oct 01 13:34:28 compute-0 sshd-session[265351]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:34:28 compute-0 sshd-session[265351]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:34:29 compute-0 ceph-mon[74802]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:30 compute-0 sshd-session[265351]: Failed password for invalid user seekcy from 27.254.137.144 port 59474 ssh2
Oct 01 13:34:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:31 compute-0 sshd-session[265351]: Received disconnect from 27.254.137.144 port 59474:11: Bye Bye [preauth]
Oct 01 13:34:31 compute-0 sshd-session[265351]: Disconnected from invalid user seekcy 27.254.137.144 port 59474 [preauth]
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.772613) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671772660, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 250, "total_data_size": 3484086, "memory_usage": 3539688, "flush_reason": "Manual Compaction"}
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671877387, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1975939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16427, "largest_seqno": 18470, "table_properties": {"data_size": 1969425, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16564, "raw_average_key_size": 20, "raw_value_size": 1954910, "raw_average_value_size": 2389, "num_data_blocks": 157, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325439, "oldest_key_time": 1759325439, "file_creation_time": 1759325671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 104847 microseconds, and 6579 cpu microseconds.
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.877457) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1975939 bytes OK
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.877491) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898716) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898789) EVENT_LOG_v1 {"time_micros": 1759325671898778, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.898820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3475504, prev total WAL file size 3475504, number of live WAL files 2.
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.900632) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1929KB)], [38(7724KB)]
Oct 01 13:34:31 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325671900703, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9886269, "oldest_snapshot_seqno": -1}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4450 keys, 8015062 bytes, temperature: kUnknown
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672422319, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8015062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7984241, "index_size": 18615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107281, "raw_average_key_size": 24, "raw_value_size": 7902809, "raw_average_value_size": 1775, "num_data_blocks": 792, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325671, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:34:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:32 compute-0 ceph-mon[74802]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.422833) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8015062 bytes
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.486873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.0 rd, 15.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.5 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(9.1) write-amplify(4.1) OK, records in: 4853, records dropped: 403 output_compression: NoCompression
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.486934) EVENT_LOG_v1 {"time_micros": 1759325672486910, "job": 18, "event": "compaction_finished", "compaction_time_micros": 521375, "compaction_time_cpu_micros": 37287, "output_level": 6, "num_output_files": 1, "total_output_size": 8015062, "num_input_records": 4853, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672487776, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672489716, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:31.900498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.489883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.721813) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672721922, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 263, "num_deletes": 251, "total_data_size": 14510, "memory_usage": 19528, "flush_reason": "Manual Compaction"}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672737560, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 14466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18471, "largest_seqno": 18733, "table_properties": {"data_size": 12632, "index_size": 67, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4730, "raw_average_key_size": 18, "raw_value_size": 9151, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325672, "oldest_key_time": 1759325672, "file_creation_time": 1759325672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15800 microseconds, and 1776 cpu microseconds.
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.737630) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 14466 bytes OK
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.737657) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741068) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741094) EVENT_LOG_v1 {"time_micros": 1759325672741085, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741122) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 12466, prev total WAL file size 12466, number of live WAL files 2.
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(14KB)], [41(7827KB)]
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672741724, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8029528, "oldest_snapshot_seqno": -1}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4203 keys, 6265676 bytes, temperature: kUnknown
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672841638, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6265676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6238181, "index_size": 15866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 102844, "raw_average_key_size": 24, "raw_value_size": 6162736, "raw_average_value_size": 1466, "num_data_blocks": 667, "num_entries": 4203, "num_filter_entries": 4203, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.842046) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6265676 bytes
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.848804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 62.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 7.6 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(988.2) write-amplify(433.1) OK, records in: 4710, records dropped: 507 output_compression: NoCompression
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.848849) EVENT_LOG_v1 {"time_micros": 1759325672848829, "job": 20, "event": "compaction_finished", "compaction_time_micros": 100036, "compaction_time_cpu_micros": 28769, "output_level": 6, "num_output_files": 1, "total_output_size": 6265676, "num_input_records": 4710, "num_output_records": 4203, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672849055, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325672854076, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.741531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:34:32.854227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:34:33 compute-0 ceph-mon[74802]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:36 compute-0 ceph-mon[74802]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:37 compute-0 ceph-mon[74802]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:37 compute-0 podman[265355]: 2025-10-01 13:34:37.560563994 +0000 UTC m=+0.094056434 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:34:37 compute-0 podman[265356]: 2025-10-01 13:34:37.573170166 +0000 UTC m=+0.103676131 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 01 13:34:37 compute-0 podman[265353]: 2025-10-01 13:34:37.593924486 +0000 UTC m=+0.137098585 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 01 13:34:37 compute-0 podman[265354]: 2025-10-01 13:34:37.595772795 +0000 UTC m=+0.134732859 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:34:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:39 compute-0 ceph-mon[74802]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:41 compute-0 ceph-mon[74802]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:41 compute-0 sshd-session[265433]: Invalid user oracle from 200.7.101.139 port 33850
Oct 01 13:34:41 compute-0 sshd-session[265433]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:34:41 compute-0 sshd-session[265433]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:34:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:43 compute-0 ceph-mon[74802]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:43 compute-0 sshd-session[265433]: Failed password for invalid user oracle from 200.7.101.139 port 33850 ssh2
Oct 01 13:34:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:45 compute-0 sshd-session[265433]: Received disconnect from 200.7.101.139 port 33850:11: Bye Bye [preauth]
Oct 01 13:34:45 compute-0 sshd-session[265433]: Disconnected from invalid user oracle 200.7.101.139 port 33850 [preauth]
Oct 01 13:34:45 compute-0 ceph-mon[74802]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:34:47
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:34:47 compute-0 ceph-mon[74802]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:34:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:34:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:49 compute-0 ceph-mon[74802]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:51 compute-0 ceph-mon[74802]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.370 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 13:34:53 compute-0 nova_compute[260022]: 2025-10-01 13:34:53.383 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:53 compute-0 ceph-mon[74802]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:34:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:34:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:34:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:34:55 compute-0 nova_compute[260022]: 2025-10-01 13:34:55.392 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:55 compute-0 nova_compute[260022]: 2025-10-01 13:34:55.392 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:56 compute-0 ceph-mon[74802]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:34:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2770122084' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:34:56 compute-0 nova_compute[260022]: 2025-10-01 13:34:56.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:34:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.359 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:57 compute-0 nova_compute[260022]: 2025-10-01 13:34:57.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:34:57 compute-0 ceph-mon[74802]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.380 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:34:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:34:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84078365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:34:59 compute-0 nova_compute[260022]: 2025-10-01 13:34:59.898 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:34:59 compute-0 ceph-mon[74802]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.133 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:35:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.857 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.857 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:35:00 compute-0 nova_compute[260022]: 2025-10-01 13:35:00.940 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 13:35:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/84078365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.041 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.041 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.063 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.085 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.104 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:35:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:35:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210871321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.538 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.546 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.565 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.568 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:35:01 compute-0 nova_compute[260022]: 2025-10-01 13:35:01.568 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:35:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:02 compute-0 ceph-mon[74802]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3210871321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:35:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:03 compute-0 ceph-mon[74802]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:05 compute-0 ceph-mon[74802]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:07 compute-0 ceph-mon[74802]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:08 compute-0 sshd-session[265481]: Invalid user wg from 80.253.31.232 port 60290
Oct 01 13:35:08 compute-0 sshd-session[265481]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:35:08 compute-0 sshd-session[265481]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:35:08 compute-0 podman[265484]: 2025-10-01 13:35:08.123888007 +0000 UTC m=+0.069213334 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct 01 13:35:08 compute-0 podman[265491]: 2025-10-01 13:35:08.126294373 +0000 UTC m=+0.065496455 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:35:08 compute-0 podman[265483]: 2025-10-01 13:35:08.134107142 +0000 UTC m=+0.097811174 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 01 13:35:08 compute-0 podman[265489]: 2025-10-01 13:35:08.158059864 +0000 UTC m=+0.100735847 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:35:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:09 compute-0 sshd-session[265481]: Failed password for invalid user wg from 80.253.31.232 port 60290 ssh2
Oct 01 13:35:09 compute-0 ceph-mon[74802]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:10 compute-0 sshd-session[265481]: Received disconnect from 80.253.31.232 port 60290:11: Bye Bye [preauth]
Oct 01 13:35:10 compute-0 sshd-session[265481]: Disconnected from invalid user wg 80.253.31.232 port 60290 [preauth]
Oct 01 13:35:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:11 compute-0 ceph-mon[74802]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:35:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.301 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:35:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:35:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:35:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:13 compute-0 ceph-mon[74802]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:15 compute-0 ceph-mon[74802]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:17 compute-0 ceph-mon[74802]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:18 compute-0 sudo[265556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:18 compute-0 sudo[265556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:18 compute-0 sudo[265556]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:18 compute-0 sudo[265581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:35:18 compute-0 sudo[265581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:18 compute-0 sudo[265581]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:18 compute-0 sudo[265606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:18 compute-0 sudo[265606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:18 compute-0 sudo[265606]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:18 compute-0 sudo[265631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:35:18 compute-0 sudo[265631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:19 compute-0 sudo[265631]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 59c67d39-745e-40a6-8d0d-6438665aaf60 does not exist
Oct 01 13:35:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d312aae-ddbf-46fd-8933-2749ecd87eed does not exist
Oct 01 13:35:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 07f94bf8-0484-4469-9014-fd03aa6d21c8 does not exist
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:35:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:35:19 compute-0 sudo[265686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:19 compute-0 sudo[265686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:19 compute-0 sudo[265686]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:19 compute-0 sudo[265711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:35:19 compute-0 sudo[265711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:19 compute-0 sudo[265711]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:19 compute-0 sudo[265736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:19 compute-0 sudo[265736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:19 compute-0 sudo[265736]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:19 compute-0 sudo[265761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:35:19 compute-0 sudo[265761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:19 compute-0 ceph-mon[74802]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:35:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.114933438 +0000 UTC m=+0.095517790 container create 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.057020546 +0000 UTC m=+0.037604908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:20 compute-0 systemd[1]: Started libpod-conmon-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope.
Oct 01 13:35:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.626110207 +0000 UTC m=+0.606694619 container init 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.636720814 +0000 UTC m=+0.617305176 container start 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:35:20 compute-0 eloquent_nash[265842]: 167 167
Oct 01 13:35:20 compute-0 systemd[1]: libpod-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope: Deactivated successfully.
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.738644107 +0000 UTC m=+0.719228429 container attach 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:35:20 compute-0 podman[265826]: 2025-10-01 13:35:20.739187155 +0000 UTC m=+0.719771497 container died 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a48255cfc7921e33424bf7dfa4892b8be73e0a8cf7cc2cd7519775b7c583d420-merged.mount: Deactivated successfully.
Oct 01 13:35:21 compute-0 podman[265826]: 2025-10-01 13:35:21.24024831 +0000 UTC m=+1.220832672 container remove 9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:21 compute-0 systemd[1]: libpod-conmon-9c9de0f73bf102262907c41ca2a29708adaab15046c9a6011946ae11ae25a61b.scope: Deactivated successfully.
Oct 01 13:35:21 compute-0 podman[265865]: 2025-10-01 13:35:21.400212711 +0000 UTC m=+0.024905343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:21 compute-0 podman[265865]: 2025-10-01 13:35:21.595705732 +0000 UTC m=+0.220398324 container create 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:35:21 compute-0 systemd[1]: Started libpod-conmon-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope.
Oct 01 13:35:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:21 compute-0 podman[265865]: 2025-10-01 13:35:21.96237131 +0000 UTC m=+0.587063952 container init 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:35:21 compute-0 podman[265865]: 2025-10-01 13:35:21.972841113 +0000 UTC m=+0.597533705 container start 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:35:21 compute-0 ceph-mon[74802]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:22 compute-0 podman[265865]: 2025-10-01 13:35:22.063917733 +0000 UTC m=+0.688610365 container attach 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:23 compute-0 sharp_mcclintock[265881]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:35:23 compute-0 sharp_mcclintock[265881]: --> relative data size: 1.0
Oct 01 13:35:23 compute-0 sharp_mcclintock[265881]: --> All data devices are unavailable
Oct 01 13:35:23 compute-0 systemd[1]: libpod-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Deactivated successfully.
Oct 01 13:35:23 compute-0 systemd[1]: libpod-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Consumed 1.077s CPU time.
Oct 01 13:35:23 compute-0 podman[265865]: 2025-10-01 13:35:23.09890946 +0000 UTC m=+1.723602052 container died 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 01 13:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-819eff99dfadd7ceba38e6cf76382d89fd964ca84d61b65a7b5b97f511a8f25f-merged.mount: Deactivated successfully.
Oct 01 13:35:23 compute-0 podman[265865]: 2025-10-01 13:35:23.303189991 +0000 UTC m=+1.927882573 container remove 191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mcclintock, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:23 compute-0 systemd[1]: libpod-conmon-191412791f1a77f5c1aa49c481212c227e4028e75a54558ddcbf8f5fe34bac8b.scope: Deactivated successfully.
Oct 01 13:35:23 compute-0 sudo[265761]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:23 compute-0 sudo[265924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:23 compute-0 sudo[265924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:23 compute-0 sudo[265924]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:23 compute-0 sudo[265949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:35:23 compute-0 sudo[265949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:23 compute-0 sudo[265949]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:23 compute-0 sudo[265974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:23 compute-0 sudo[265974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:23 compute-0 sudo[265974]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:23 compute-0 sudo[265999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:35:23 compute-0 sudo[265999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:24 compute-0 ceph-mon[74802]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.089215786 +0000 UTC m=+0.100421248 container create c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.025917061 +0000 UTC m=+0.037122573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:24 compute-0 systemd[1]: Started libpod-conmon-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope.
Oct 01 13:35:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.273571753 +0000 UTC m=+0.284777215 container init c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.283844637 +0000 UTC m=+0.295050099 container start c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 13:35:24 compute-0 cranky_meitner[266084]: 167 167
Oct 01 13:35:24 compute-0 systemd[1]: libpod-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope: Deactivated successfully.
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.301225655 +0000 UTC m=+0.312431127 container attach c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.301636818 +0000 UTC m=+0.312842270 container died c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:35:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4019de7d2338c3ad157c9e10035b2d7cbddf15625408a63cdabbbb8215c6df55-merged.mount: Deactivated successfully.
Oct 01 13:35:24 compute-0 podman[266066]: 2025-10-01 13:35:24.80279958 +0000 UTC m=+0.814005002 container remove c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:24 compute-0 systemd[1]: libpod-conmon-c4009d8379b10bff5db961bb4b1654a0062a414d7b1e9296f907f671436e2508.scope: Deactivated successfully.
Oct 01 13:35:25 compute-0 podman[266110]: 2025-10-01 13:35:25.033698795 +0000 UTC m=+0.070258338 container create 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:35:25 compute-0 podman[266110]: 2025-10-01 13:35:24.996442159 +0000 UTC m=+0.033001702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:25 compute-0 systemd[1]: Started libpod-conmon-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope.
Oct 01 13:35:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:25 compute-0 ceph-mon[74802]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:25 compute-0 podman[266110]: 2025-10-01 13:35:25.259109487 +0000 UTC m=+0.295669050 container init 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:35:25 compute-0 podman[266110]: 2025-10-01 13:35:25.271876199 +0000 UTC m=+0.308435762 container start 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:25 compute-0 podman[266110]: 2025-10-01 13:35:25.282780133 +0000 UTC m=+0.319339696 container attach 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:35:26 compute-0 boring_tu[266126]: {
Oct 01 13:35:26 compute-0 boring_tu[266126]:     "0": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:         {
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "devices": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "/dev/loop3"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             ],
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_name": "ceph_lv0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_size": "21470642176",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "name": "ceph_lv0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "tags": {
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_name": "ceph",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.crush_device_class": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.encrypted": "0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_id": "0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.vdo": "0"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             },
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "vg_name": "ceph_vg0"
Oct 01 13:35:26 compute-0 boring_tu[266126]:         }
Oct 01 13:35:26 compute-0 boring_tu[266126]:     ],
Oct 01 13:35:26 compute-0 boring_tu[266126]:     "1": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:         {
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "devices": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "/dev/loop4"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             ],
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_name": "ceph_lv1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_size": "21470642176",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "name": "ceph_lv1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "tags": {
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_name": "ceph",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.crush_device_class": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.encrypted": "0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_id": "1",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.vdo": "0"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             },
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "vg_name": "ceph_vg1"
Oct 01 13:35:26 compute-0 boring_tu[266126]:         }
Oct 01 13:35:26 compute-0 boring_tu[266126]:     ],
Oct 01 13:35:26 compute-0 boring_tu[266126]:     "2": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:         {
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "devices": [
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "/dev/loop5"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             ],
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_name": "ceph_lv2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_size": "21470642176",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "name": "ceph_lv2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "tags": {
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.cluster_name": "ceph",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.crush_device_class": "",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.encrypted": "0",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osd_id": "2",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:                 "ceph.vdo": "0"
Oct 01 13:35:26 compute-0 boring_tu[266126]:             },
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "type": "block",
Oct 01 13:35:26 compute-0 boring_tu[266126]:             "vg_name": "ceph_vg2"
Oct 01 13:35:26 compute-0 boring_tu[266126]:         }
Oct 01 13:35:26 compute-0 boring_tu[266126]:     ]
Oct 01 13:35:26 compute-0 boring_tu[266126]: }
Oct 01 13:35:26 compute-0 systemd[1]: libpod-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope: Deactivated successfully.
Oct 01 13:35:26 compute-0 conmon[266126]: conmon 3febc8671aa504144c6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope/container/memory.events
Oct 01 13:35:26 compute-0 podman[266110]: 2025-10-01 13:35:26.109999992 +0000 UTC m=+1.146559555 container died 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:35:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6db679ed6f57944373b0f142bc6f6ce19e7c529bc9e0a59e3b0848cb1cb195-merged.mount: Deactivated successfully.
Oct 01 13:35:26 compute-0 sshd-session[265456]: error: kex_exchange_identification: read: Connection reset by peer
Oct 01 13:35:26 compute-0 sshd-session[265456]: Connection reset by 45.140.17.97 port 48586
Oct 01 13:35:26 compute-0 podman[266110]: 2025-10-01 13:35:26.723338913 +0000 UTC m=+1.759898476 container remove 3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_tu, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:26 compute-0 sudo[265999]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:26 compute-0 systemd[1]: libpod-conmon-3febc8671aa504144c6bcfa01337a537f966fac72160192a3f2d60cf8724a1d9.scope: Deactivated successfully.
Oct 01 13:35:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:26 compute-0 sudo[266146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:26 compute-0 sudo[266146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:26 compute-0 sudo[266146]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:26 compute-0 sudo[266171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:35:26 compute-0 sudo[266171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:26 compute-0 sudo[266171]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:27 compute-0 sudo[266196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:27 compute-0 sudo[266196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:27 compute-0 sudo[266196]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:27 compute-0 sudo[266221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:35:27 compute-0 sudo[266221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:27 compute-0 podman[266286]: 2025-10-01 13:35:27.506583794 +0000 UTC m=+0.082562276 container create 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:27 compute-0 podman[266286]: 2025-10-01 13:35:27.460623653 +0000 UTC m=+0.036602135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:27 compute-0 systemd[1]: Started libpod-conmon-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope.
Oct 01 13:35:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:27 compute-0 ceph-mon[74802]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:27 compute-0 podman[266286]: 2025-10-01 13:35:27.863079852 +0000 UTC m=+0.439058354 container init 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:35:27 compute-0 podman[266286]: 2025-10-01 13:35:27.875918646 +0000 UTC m=+0.451897128 container start 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct 01 13:35:27 compute-0 bold_jackson[266302]: 167 167
Oct 01 13:35:27 compute-0 systemd[1]: libpod-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope: Deactivated successfully.
Oct 01 13:35:28 compute-0 podman[266286]: 2025-10-01 13:35:28.031945369 +0000 UTC m=+0.607923921 container attach 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:28 compute-0 podman[266286]: 2025-10-01 13:35:28.032489246 +0000 UTC m=+0.608467738 container died 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-29790dde0353d70103c2625bedddc13f334e6a6d153c25a5a6652a15fbbaa8cf-merged.mount: Deactivated successfully.
Oct 01 13:35:28 compute-0 podman[266286]: 2025-10-01 13:35:28.423970067 +0000 UTC m=+0.999948529 container remove 73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:28 compute-0 systemd[1]: libpod-conmon-73ca0b524f1d7b1ee32ef051ea4e9fd516d66495b957e26b4b0547706f2913ae.scope: Deactivated successfully.
Oct 01 13:35:28 compute-0 podman[266329]: 2025-10-01 13:35:28.662976288 +0000 UTC m=+0.112309344 container create 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:35:28 compute-0 podman[266329]: 2025-10-01 13:35:28.578201333 +0000 UTC m=+0.027534489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:35:28 compute-0 systemd[1]: Started libpod-conmon-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope.
Oct 01 13:35:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:35:28 compute-0 podman[266329]: 2025-10-01 13:35:28.906118149 +0000 UTC m=+0.355451285 container init 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:35:28 compute-0 podman[266329]: 2025-10-01 13:35:28.913650286 +0000 UTC m=+0.362983372 container start 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:35:29 compute-0 podman[266329]: 2025-10-01 13:35:29.004329238 +0000 UTC m=+0.453662324 container attach 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:35:29 compute-0 ceph-mon[74802]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:30 compute-0 vibrant_napier[266348]: {
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_id": 0,
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "type": "bluestore"
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     },
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_id": 2,
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "type": "bluestore"
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     },
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_id": 1,
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:         "type": "bluestore"
Oct 01 13:35:30 compute-0 vibrant_napier[266348]:     }
Oct 01 13:35:30 compute-0 vibrant_napier[266348]: }
Oct 01 13:35:30 compute-0 systemd[1]: libpod-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Deactivated successfully.
Oct 01 13:35:30 compute-0 podman[266329]: 2025-10-01 13:35:30.12421571 +0000 UTC m=+1.573548766 container died 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:35:30 compute-0 systemd[1]: libpod-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Consumed 1.219s CPU time.
Oct 01 13:35:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-664d89bdba65ef095c17334ce621f238b973e87aed5e077ad595778650cb121e-merged.mount: Deactivated successfully.
Oct 01 13:35:30 compute-0 podman[266329]: 2025-10-01 13:35:30.225534597 +0000 UTC m=+1.674867673 container remove 1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:35:30 compute-0 systemd[1]: libpod-conmon-1ead294e0b6959e85c2fd40653447b0afcbca5e64336ec4db0a8a8bff33b2335.scope: Deactivated successfully.
Oct 01 13:35:30 compute-0 sudo[266221]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:35:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:35:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7de5d384-2cca-4f44-bc56-4b54e793cf6b does not exist
Oct 01 13:35:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 527c486e-b027-45c1-8c49-35d93bee7af1 does not exist
Oct 01 13:35:30 compute-0 sudo[266397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:35:30 compute-0 sudo[266397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:30 compute-0 sudo[266397]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:30 compute-0 sudo[266422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:35:30 compute-0 sudo[266422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:35:30 compute-0 sudo[266422]: pam_unix(sudo:session): session closed for user root
Oct 01 13:35:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:35:31 compute-0 ceph-mon[74802]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:34 compute-0 ceph-mon[74802]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:35 compute-0 ceph-mon[74802]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:38 compute-0 ceph-mon[74802]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:38 compute-0 podman[266450]: 2025-10-01 13:35:38.549173806 +0000 UTC m=+0.083962730 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923)
Oct 01 13:35:38 compute-0 podman[266449]: 2025-10-01 13:35:38.578295986 +0000 UTC m=+0.114094811 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:35:38 compute-0 podman[266448]: 2025-10-01 13:35:38.57906823 +0000 UTC m=+0.115668870 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:35:38 compute-0 podman[266447]: 2025-10-01 13:35:38.590581784 +0000 UTC m=+0.128559597 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true)
Oct 01 13:35:39 compute-0 ceph-mon[74802]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:40 compute-0 unix_chkpwd[266529]: password check failed for user (root)
Oct 01 13:35:40 compute-0 sshd-session[266527]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:35:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:41 compute-0 ceph-mon[74802]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:43 compute-0 sshd-session[266527]: Failed password for root from 27.254.137.144 port 55142 ssh2
Oct 01 13:35:44 compute-0 ceph-mon[74802]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:44 compute-0 sshd-session[266527]: Received disconnect from 27.254.137.144 port 55142:11: Bye Bye [preauth]
Oct 01 13:35:44 compute-0 sshd-session[266527]: Disconnected from authenticating user root 27.254.137.144 port 55142 [preauth]
Oct 01 13:35:45 compute-0 ceph-mon[74802]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:47 compute-0 sshd-session[266530]: Invalid user usuario from 185.156.73.233 port 50872
Oct 01 13:35:47 compute-0 sshd-session[266530]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:35:47 compute-0 sshd-session[266530]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233
Oct 01 13:35:47 compute-0 ceph-mon[74802]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:35:47
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', 'images', '.rgw.root']
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:35:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:35:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:49 compute-0 sshd-session[266530]: Failed password for invalid user usuario from 185.156.73.233 port 50872 ssh2
Oct 01 13:35:49 compute-0 ceph-mon[74802]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:50 compute-0 sshd-session[266530]: Connection closed by invalid user usuario 185.156.73.233 port 50872 [preauth]
Oct 01 13:35:51 compute-0 ceph-mon[74802]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:53 compute-0 ceph-mon[74802]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:35:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:35:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:35:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:35:55 compute-0 ceph-mon[74802]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:35:57 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:35:57 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1070204012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:35:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:35:58 compute-0 ceph-mon[74802]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.568 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.569 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.588 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.588 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.589 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.589 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.590 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:58 compute-0 nova_compute[260022]: 2025-10-01 13:35:58.590 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.377 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:35:59 compute-0 ceph-mon[74802]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:35:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:35:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952629815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:35:59 compute-0 nova_compute[260022]: 2025-10-01 13:35:59.915 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.194 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.196 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.197 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.197 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.251 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.252 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.269 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:36:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2952629815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:36:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:36:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025770361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.908 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.917 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.933 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.935 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:36:00 compute-0 nova_compute[260022]: 2025-10-01 13:36:00.936 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:36:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:01 compute-0 ceph-mon[74802]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1025770361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:36:01 compute-0 nova_compute[260022]: 2025-10-01 13:36:01.938 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:01 compute-0 nova_compute[260022]: 2025-10-01 13:36:01.938 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:02 compute-0 nova_compute[260022]: 2025-10-01 13:36:02.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:03 compute-0 unix_chkpwd[266578]: password check failed for user (root)
Oct 01 13:36:03 compute-0 sshd-session[266576]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139  user=root
Oct 01 13:36:03 compute-0 ceph-mon[74802]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:05 compute-0 sshd-session[266576]: Failed password for root from 200.7.101.139 port 53090 ssh2
Oct 01 13:36:05 compute-0 ceph-mon[74802]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:07 compute-0 sshd-session[266576]: Received disconnect from 200.7.101.139 port 53090:11: Bye Bye [preauth]
Oct 01 13:36:07 compute-0 sshd-session[266576]: Disconnected from authenticating user root 200.7.101.139 port 53090 [preauth]
Oct 01 13:36:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:08 compute-0 ceph-mon[74802]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.243299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768243344, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 992, "num_deletes": 262, "total_data_size": 1407299, "memory_usage": 1437072, "flush_reason": "Manual Compaction"}
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct 01 13:36:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768618485, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1394562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18734, "largest_seqno": 19725, "table_properties": {"data_size": 1389678, "index_size": 2408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10069, "raw_average_key_size": 18, "raw_value_size": 1379870, "raw_average_value_size": 2541, "num_data_blocks": 110, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325672, "oldest_key_time": 1759325672, "file_creation_time": 1759325768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 375291 microseconds, and 4422 cpu microseconds.
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:36:08 compute-0 sshd-session[266579]: Invalid user seekcy from 80.253.31.232 port 51298
Oct 01 13:36:08 compute-0 sshd-session[266579]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:36:08 compute-0 sshd-session[266579]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.618586) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1394562 bytes OK
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.618612) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836246) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836307) EVENT_LOG_v1 {"time_micros": 1759325768836293, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.836337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1402541, prev total WAL file size 1402541, number of live WAL files 2.
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.837136) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353039' seq:0, type:0; will stop at (end)
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1361KB)], [44(6118KB)]
Oct 01 13:36:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325768837211, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7660238, "oldest_snapshot_seqno": -1}
Oct 01 13:36:08 compute-0 podman[266583]: 2025-10-01 13:36:08.849908086 +0000 UTC m=+0.084453376 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 13:36:08 compute-0 podman[266584]: 2025-10-01 13:36:08.856825444 +0000 UTC m=+0.081237694 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923)
Oct 01 13:36:08 compute-0 podman[266582]: 2025-10-01 13:36:08.862265936 +0000 UTC m=+0.099123069 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct 01 13:36:08 compute-0 podman[266581]: 2025-10-01 13:36:08.884494167 +0000 UTC m=+0.126368688 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4210 keys, 7524904 bytes, temperature: kUnknown
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769548244, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7524904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7495469, "index_size": 17805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104089, "raw_average_key_size": 24, "raw_value_size": 7417955, "raw_average_value_size": 1761, "num_data_blocks": 747, "num_entries": 4210, "num_filter_entries": 4210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.548609) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7524904 bytes
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.881317) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.8 rd, 10.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 4746, records dropped: 536 output_compression: NoCompression
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.881377) EVENT_LOG_v1 {"time_micros": 1759325769881354, "job": 22, "event": "compaction_finished", "compaction_time_micros": 711141, "compaction_time_cpu_micros": 23147, "output_level": 6, "num_output_files": 1, "total_output_size": 7524904, "num_input_records": 4746, "num_output_records": 4210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769882079, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325769884413, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:08.837046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:36:09.884620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:36:09 compute-0 ceph-mon[74802]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:10 compute-0 sshd-session[266579]: Failed password for invalid user seekcy from 80.253.31.232 port 51298 ssh2
Oct 01 13:36:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:11 compute-0 sshd-session[266579]: Received disconnect from 80.253.31.232 port 51298:11: Bye Bye [preauth]
Oct 01 13:36:11 compute-0 sshd-session[266579]: Disconnected from invalid user seekcy 80.253.31.232 port 51298 [preauth]
Oct 01 13:36:11 compute-0 ceph-mon[74802]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:36:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.302 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:36:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:36:12.303 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:36:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:13 compute-0 ceph-mon[74802]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:16 compute-0 ceph-mon[74802]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:17 compute-0 ceph-mon[74802]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:19 compute-0 ceph-mon[74802]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:21 compute-0 ceph-mon[74802]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:23 compute-0 ceph-mon[74802]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:26 compute-0 ceph-mon[74802]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:27 compute-0 ceph-mon[74802]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:29 compute-0 ceph-mon[74802]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:30 compute-0 sudo[266664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:30 compute-0 sudo[266664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:30 compute-0 sudo[266664]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:30 compute-0 sudo[266689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:36:30 compute-0 sudo[266689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:30 compute-0 sudo[266689]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:30 compute-0 sudo[266714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:30 compute-0 sudo[266714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:30 compute-0 sudo[266714]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:30 compute-0 sudo[266739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:36:30 compute-0 sudo[266739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:31 compute-0 sudo[266739]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f47f3733-b67f-4acc-a25c-680085b564f4 does not exist
Oct 01 13:36:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 69b6fb71-d0da-46e0-9b2f-f878cabac4a2 does not exist
Oct 01 13:36:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8b8be85b-740c-4af0-8001-a76f36c0c18c does not exist
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:36:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:36:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:36:31 compute-0 sudo[266795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:31 compute-0 sudo[266795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:31 compute-0 sudo[266795]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:31 compute-0 sudo[266820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:36:31 compute-0 sudo[266820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:31 compute-0 sudo[266820]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:31 compute-0 sudo[266845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:31 compute-0 sudo[266845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:31 compute-0 sudo[266845]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:31 compute-0 sudo[266870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:36:31 compute-0 sudo[266870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:32 compute-0 ceph-mon[74802]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:36:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:36:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:32 compute-0 podman[266937]: 2025-10-01 13:36:32.368725678 +0000 UTC m=+0.042679787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:32 compute-0 podman[266937]: 2025-10-01 13:36:32.550948117 +0000 UTC m=+0.224902186 container create 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:36:32 compute-0 systemd[1]: Started libpod-conmon-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope.
Oct 01 13:36:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:32 compute-0 podman[266937]: 2025-10-01 13:36:32.973974643 +0000 UTC m=+0.647928762 container init 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:36:32 compute-0 podman[266937]: 2025-10-01 13:36:32.984272508 +0000 UTC m=+0.658226527 container start 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:36:32 compute-0 adoring_khorana[266953]: 167 167
Oct 01 13:36:32 compute-0 systemd[1]: libpod-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope: Deactivated successfully.
Oct 01 13:36:32 compute-0 conmon[266953]: conmon 667e9011cfa518ada29f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope/container/memory.events
Oct 01 13:36:33 compute-0 podman[266937]: 2025-10-01 13:36:33.096286332 +0000 UTC m=+0.770240391 container attach 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:36:33 compute-0 podman[266937]: 2025-10-01 13:36:33.096805918 +0000 UTC m=+0.770759957 container died 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:36:33 compute-0 ceph-mon[74802]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-40925f290af5545a0c6d599d2271ce290ea53ddcc7b4bb0bc8e6bf585d4268fd-merged.mount: Deactivated successfully.
Oct 01 13:36:34 compute-0 podman[266937]: 2025-10-01 13:36:34.008481922 +0000 UTC m=+1.682435941 container remove 667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:36:34 compute-0 systemd[1]: libpod-conmon-667e9011cfa518ada29f9f0f23046f76d05e52b8a0ec989029a278d081c0f235.scope: Deactivated successfully.
Oct 01 13:36:34 compute-0 podman[266978]: 2025-10-01 13:36:34.298422299 +0000 UTC m=+0.126784281 container create 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:36:34 compute-0 podman[266978]: 2025-10-01 13:36:34.216142563 +0000 UTC m=+0.044504555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:34 compute-0 systemd[1]: Started libpod-conmon-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope.
Oct 01 13:36:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:34 compute-0 podman[266978]: 2025-10-01 13:36:34.533857728 +0000 UTC m=+0.362219690 container init 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:36:34 compute-0 podman[266978]: 2025-10-01 13:36:34.54409161 +0000 UTC m=+0.372453552 container start 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:36:34 compute-0 podman[266978]: 2025-10-01 13:36:34.58560525 +0000 UTC m=+0.413967192 container attach 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:36:35 compute-0 xenodochial_dijkstra[266995]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:36:35 compute-0 xenodochial_dijkstra[266995]: --> relative data size: 1.0
Oct 01 13:36:35 compute-0 xenodochial_dijkstra[266995]: --> All data devices are unavailable
Oct 01 13:36:35 compute-0 ceph-mon[74802]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:35 compute-0 systemd[1]: libpod-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Deactivated successfully.
Oct 01 13:36:35 compute-0 systemd[1]: libpod-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Consumed 1.033s CPU time.
Oct 01 13:36:35 compute-0 podman[266978]: 2025-10-01 13:36:35.630885749 +0000 UTC m=+1.459247701 container died 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0955094454050111a6b639cde659ec4e28112bda7cc939328dd89b65e81c83e-merged.mount: Deactivated successfully.
Oct 01 13:36:35 compute-0 podman[266978]: 2025-10-01 13:36:35.728497688 +0000 UTC m=+1.556859630 container remove 11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:36:35 compute-0 systemd[1]: libpod-conmon-11c7451c053035f0becce87168ee26de21f520c241e13f026499f077342abf0b.scope: Deactivated successfully.
Oct 01 13:36:35 compute-0 sudo[266870]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:35 compute-0 sudo[267036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:35 compute-0 sudo[267036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:35 compute-0 sudo[267036]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:35 compute-0 sudo[267061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:36:35 compute-0 sudo[267061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:35 compute-0 sudo[267061]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:36 compute-0 sudo[267086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:36 compute-0 sudo[267086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:36 compute-0 sudo[267086]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:36 compute-0 sudo[267111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:36:36 compute-0 sudo[267111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.498976317 +0000 UTC m=+0.114038899 container create 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.407746139 +0000 UTC m=+0.022808721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:36 compute-0 systemd[1]: Started libpod-conmon-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope.
Oct 01 13:36:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.686994589 +0000 UTC m=+0.302057231 container init 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.695865949 +0000 UTC m=+0.310928511 container start 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.700196835 +0000 UTC m=+0.315259417 container attach 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:36:36 compute-0 practical_engelbart[267194]: 167 167
Oct 01 13:36:36 compute-0 systemd[1]: libpod-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope: Deactivated successfully.
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.70412645 +0000 UTC m=+0.319189012 container died 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-edfaf09515f5a85743ce55602a9cbfea8738a929768ed36546cf16039798cef7-merged.mount: Deactivated successfully.
Oct 01 13:36:36 compute-0 podman[267178]: 2025-10-01 13:36:36.754472248 +0000 UTC m=+0.369534800 container remove 38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:36:36 compute-0 systemd[1]: libpod-conmon-38c7ec17794071636bedde7f02d16164033e5d149c5b45d980fb80352a3eb0ca.scope: Deactivated successfully.
Oct 01 13:36:36 compute-0 podman[267218]: 2025-10-01 13:36:36.933284049 +0000 UTC m=+0.043085730 container create b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:36:36 compute-0 systemd[1]: Started libpod-conmon-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope.
Oct 01 13:36:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:36.91334159 +0000 UTC m=+0.023143301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:37.013531281 +0000 UTC m=+0.123332982 container init b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:37.02110794 +0000 UTC m=+0.130909621 container start b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:37.025160678 +0000 UTC m=+0.134962349 container attach b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:36:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:37 compute-0 ceph-mon[74802]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]: {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     "0": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "devices": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "/dev/loop3"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             ],
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_name": "ceph_lv0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_size": "21470642176",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "name": "ceph_lv0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "tags": {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_name": "ceph",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.crush_device_class": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.encrypted": "0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_id": "0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.vdo": "0"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             },
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "vg_name": "ceph_vg0"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         }
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     ],
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     "1": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "devices": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "/dev/loop4"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             ],
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_name": "ceph_lv1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_size": "21470642176",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "name": "ceph_lv1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "tags": {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_name": "ceph",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.crush_device_class": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.encrypted": "0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_id": "1",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.vdo": "0"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             },
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "vg_name": "ceph_vg1"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         }
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     ],
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     "2": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "devices": [
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "/dev/loop5"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             ],
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_name": "ceph_lv2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_size": "21470642176",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "name": "ceph_lv2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "tags": {
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.cluster_name": "ceph",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.crush_device_class": "",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.encrypted": "0",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osd_id": "2",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:                 "ceph.vdo": "0"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             },
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "type": "block",
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:             "vg_name": "ceph_vg2"
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:         }
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]:     ]
Oct 01 13:36:37 compute-0 affectionate_ramanujan[267234]: }
Oct 01 13:36:37 compute-0 systemd[1]: libpod-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope: Deactivated successfully.
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:37.849579229 +0000 UTC m=+0.959380910 container died b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-705388e6c69862c5cb019ebee5f22408ee04bf4b7c1560cba685992a1638f2a9-merged.mount: Deactivated successfully.
Oct 01 13:36:37 compute-0 podman[267218]: 2025-10-01 13:36:37.906162213 +0000 UTC m=+1.015963894 container remove b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:36:37 compute-0 systemd[1]: libpod-conmon-b9997d8e1e7ff771318a1b89908582097d776fd26d01eed9ee65d9e66e204f6f.scope: Deactivated successfully.
Oct 01 13:36:37 compute-0 sudo[267111]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:38 compute-0 sudo[267256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:38 compute-0 sudo[267256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:38 compute-0 sudo[267256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:38 compute-0 sudo[267281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:36:38 compute-0 sudo[267281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:38 compute-0 sudo[267281]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:38 compute-0 sudo[267306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:38 compute-0 sudo[267306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:38 compute-0 sudo[267306]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:38 compute-0 sudo[267331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:36:38 compute-0 sudo[267331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.61003437 +0000 UTC m=+0.104212429 container create 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.533414843 +0000 UTC m=+0.027592922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:38 compute-0 systemd[1]: Started libpod-conmon-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope.
Oct 01 13:36:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.715173747 +0000 UTC m=+0.209351826 container init 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.72604628 +0000 UTC m=+0.220224339 container start 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:36:38 compute-0 happy_cartwright[267412]: 167 167
Oct 01 13:36:38 compute-0 systemd[1]: libpod-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope: Deactivated successfully.
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.738338748 +0000 UTC m=+0.232516837 container attach 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.738873105 +0000 UTC m=+0.233051164 container died 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ed8d3caa1c87e8673e13b8d0aaf3076b1f353626709188a19221466d387bca-merged.mount: Deactivated successfully.
Oct 01 13:36:38 compute-0 podman[267396]: 2025-10-01 13:36:38.775358556 +0000 UTC m=+0.269536625 container remove 0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cartwright, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:36:38 compute-0 systemd[1]: libpod-conmon-0126ab0b37e3a1b47c3e562c2a5bc53d96030b7ba82b7da5394a0584d5dea7dd.scope: Deactivated successfully.
Oct 01 13:36:38 compute-0 podman[267436]: 2025-10-01 13:36:38.938341449 +0000 UTC m=+0.050882987 container create 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:36:38 compute-0 systemd[1]: Started libpod-conmon-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope.
Oct 01 13:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:36:39 compute-0 podman[267436]: 2025-10-01 13:36:39.005980952 +0000 UTC m=+0.118522490 container init 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:36:39 compute-0 podman[267436]: 2025-10-01 13:36:38.912699759 +0000 UTC m=+0.025241347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:36:39 compute-0 podman[267436]: 2025-10-01 13:36:39.014824571 +0000 UTC m=+0.127366079 container start 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:36:39 compute-0 podman[267436]: 2025-10-01 13:36:39.018341592 +0000 UTC m=+0.130883100 container attach 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:36:39 compute-0 podman[267451]: 2025-10-01 13:36:39.040636165 +0000 UTC m=+0.068780140 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 13:36:39 compute-0 podman[267455]: 2025-10-01 13:36:39.045716546 +0000 UTC m=+0.070153105 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 01 13:36:39 compute-0 podman[267454]: 2025-10-01 13:36:39.103627653 +0000 UTC m=+0.131710656 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923)
Oct 01 13:36:39 compute-0 podman[267450]: 2025-10-01 13:36:39.147571709 +0000 UTC m=+0.170228601 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:36:39 compute-0 ceph-mon[74802]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]: {
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_id": 0,
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "type": "bluestore"
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     },
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_id": 2,
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "type": "bluestore"
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     },
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_id": 1,
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:         "type": "bluestore"
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]:     }
Oct 01 13:36:39 compute-0 compassionate_hopper[267456]: }
Oct 01 13:36:40 compute-0 systemd[1]: libpod-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope: Deactivated successfully.
Oct 01 13:36:40 compute-0 podman[267436]: 2025-10-01 13:36:40.003307947 +0000 UTC m=+1.115849455 container died 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:36:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-219a71a3f7e1376966d0718e48925229f88bea106fc2b56e4b154ace68ecf593-merged.mount: Deactivated successfully.
Oct 01 13:36:40 compute-0 podman[267436]: 2025-10-01 13:36:40.060641217 +0000 UTC m=+1.173182715 container remove 315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hopper, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:36:40 compute-0 systemd[1]: libpod-conmon-315f01ebf4da10962ed15bdc3df4d136b6ac5fbc16562bf7e9b6062e6ec5f417.scope: Deactivated successfully.
Oct 01 13:36:40 compute-0 sudo[267331]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:36:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:36:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6a4bd843-b873-4285-b750-f5c5cf5d9a15 does not exist
Oct 01 13:36:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7628b70a-8f0e-4ab7-9d10-3efdbdb99eb6 does not exist
Oct 01 13:36:40 compute-0 sudo[267576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:36:40 compute-0 sudo[267576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:40 compute-0 sudo[267576]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:40 compute-0 sudo[267601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:36:40 compute-0 sudo[267601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:36:40 compute-0 sudo[267601]: pam_unix(sudo:session): session closed for user root
Oct 01 13:36:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:36:41 compute-0 ceph-mon[74802]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:43 compute-0 ceph-mon[74802]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:45 compute-0 ceph-mon[74802]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:47 compute-0 ceph-mon[74802]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:36:47
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images']
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:36:47 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:36:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:49 compute-0 ceph-mon[74802]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:51 compute-0 ceph-mon[74802]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:52 compute-0 sshd-session[267626]: Invalid user zimbra from 27.254.137.144 port 50744
Oct 01 13:36:52 compute-0 sshd-session[267626]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:36:52 compute-0 sshd-session[267626]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:36:53 compute-0 ceph-mon[74802]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:54 compute-0 sshd-session[267626]: Failed password for invalid user zimbra from 27.254.137.144 port 50744 ssh2
Oct 01 13:36:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:36:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:36:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:36:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:36:55 compute-0 ceph-mon[74802]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:36:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4092646705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:36:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:56 compute-0 sshd-session[267626]: Received disconnect from 27.254.137.144 port 50744:11: Bye Bye [preauth]
Oct 01 13:36:56 compute-0 sshd-session[267626]: Disconnected from invalid user zimbra 27.254.137.144 port 50744 [preauth]
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:36:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:36:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:36:57 compute-0 nova_compute[260022]: 2025-10-01 13:36:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:57 compute-0 nova_compute[260022]: 2025-10-01 13:36:57.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:57 compute-0 ceph-mon[74802]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:58 compute-0 nova_compute[260022]: 2025-10-01 13:36:58.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:58 compute-0 nova_compute[260022]: 2025-10-01 13:36:58.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:58 compute-0 nova_compute[260022]: 2025-10-01 13:36:58.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:36:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.387 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:36:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:36:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174718930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:36:59 compute-0 ceph-mon[74802]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.828 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.995 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:36:59 compute-0 nova_compute[260022]: 2025-10-01 13:36:59.997 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.053 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.053 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.066 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:37:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:37:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735637092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.533 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.540 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.554 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.555 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:37:00 compute-0 nova_compute[260022]: 2025-10-01 13:37:00.556 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:37:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4174718930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:37:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1735637092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:37:01 compute-0 ceph-mon[74802]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:02 compute-0 nova_compute[260022]: 2025-10-01 13:37:02.539 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:02 compute-0 nova_compute[260022]: 2025-10-01 13:37:02.540 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:03 compute-0 ceph-mon[74802]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:05 compute-0 ceph-mon[74802]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:07 compute-0 ceph-mon[74802]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:09 compute-0 sshd-session[267672]: Invalid user huake from 80.253.31.232 port 58052
Oct 01 13:37:09 compute-0 sshd-session[267672]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:37:09 compute-0 sshd-session[267672]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.253.31.232
Oct 01 13:37:09 compute-0 podman[267677]: 2025-10-01 13:37:09.519797322 +0000 UTC m=+0.063516295 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:37:09 compute-0 podman[267676]: 2025-10-01 13:37:09.521826216 +0000 UTC m=+0.069876936 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid)
Oct 01 13:37:09 compute-0 podman[267674]: 2025-10-01 13:37:09.543516341 +0000 UTC m=+0.096496046 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:37:09 compute-0 podman[267675]: 2025-10-01 13:37:09.549357124 +0000 UTC m=+0.098254620 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct 01 13:37:10 compute-0 ceph-mon[74802]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:11 compute-0 sshd-session[267672]: Failed password for invalid user huake from 80.253.31.232 port 58052 ssh2
Oct 01 13:37:11 compute-0 sshd-session[267672]: Received disconnect from 80.253.31.232 port 58052:11: Bye Bye [preauth]
Oct 01 13:37:11 compute-0 sshd-session[267672]: Disconnected from invalid user huake 80.253.31.232 port 58052 [preauth]
Oct 01 13:37:12 compute-0 ceph-mon[74802]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:37:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:37:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:37:12.304 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:37:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:14 compute-0 ceph-mon[74802]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:15 compute-0 ceph-mon[74802]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:17 compute-0 ceph-mon[74802]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:18 compute-0 sshd-session[267757]: Invalid user vlado from 200.7.101.139 port 51012
Oct 01 13:37:18 compute-0 sshd-session[267757]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:37:18 compute-0 sshd-session[267757]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:37:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:19 compute-0 ceph-mon[74802]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:20 compute-0 sshd-session[267757]: Failed password for invalid user vlado from 200.7.101.139 port 51012 ssh2
Oct 01 13:37:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:21 compute-0 ceph-mon[74802]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:22 compute-0 sshd-session[267757]: Received disconnect from 200.7.101.139 port 51012:11: Bye Bye [preauth]
Oct 01 13:37:22 compute-0 sshd-session[267757]: Disconnected from invalid user vlado 200.7.101.139 port 51012 [preauth]
Oct 01 13:37:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:23 compute-0 ceph-mon[74802]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:25 compute-0 ceph-mon[74802]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:28 compute-0 ceph-mon[74802]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:29 compute-0 ceph-mon[74802]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:31 compute-0 ceph-mon[74802]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:33 compute-0 ceph-mon[74802]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:35 compute-0 ceph-mon[74802]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:38 compute-0 ceph-mon[74802]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:39 compute-0 ceph-mon[74802]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:40 compute-0 sudo[267759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:40 compute-0 sudo[267759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:40 compute-0 sudo[267759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:40 compute-0 podman[267784]: 2025-10-01 13:37:40.460905037 +0000 UTC m=+0.077692915 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:37:40 compute-0 sudo[267817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:37:40 compute-0 sudo[267817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:40 compute-0 podman[267790]: 2025-10-01 13:37:40.474482199 +0000 UTC m=+0.076208118 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 13:37:40 compute-0 sudo[267817]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:40 compute-0 podman[267783]: 2025-10-01 13:37:40.497042198 +0000 UTC m=+0.124676191 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:37:40 compute-0 podman[267791]: 2025-10-01 13:37:40.497203493 +0000 UTC m=+0.098827339 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:37:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:40 compute-0 sudo[267888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:40 compute-0 sudo[267888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:40 compute-0 sudo[267888]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:40 compute-0 sudo[267914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:37:40 compute-0 sudo[267914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:41 compute-0 sudo[267914]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev dde2e831-05ce-44cd-90db-909461227f66 does not exist
Oct 01 13:37:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bc7d3c7b-3319-4fdf-9180-fc33914a14b0 does not exist
Oct 01 13:37:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 18e3d583-4794-4246-9405-984803ffc0a3 does not exist
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:37:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:37:41 compute-0 sudo[267971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:41 compute-0 sudo[267971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:41 compute-0 sudo[267971]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:41 compute-0 sudo[267996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:37:41 compute-0 sudo[267996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:41 compute-0 sudo[267996]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:41 compute-0 ceph-mon[74802]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:37:41 compute-0 sudo[268021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:37:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:37:41 compute-0 sudo[268021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:41 compute-0 sudo[268021]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:41 compute-0 sudo[268046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:37:41 compute-0 sudo[268046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.360854628 +0000 UTC m=+0.105133119 container create f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.283692341 +0000 UTC m=+0.027970882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:42 compute-0 systemd[1]: Started libpod-conmon-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope.
Oct 01 13:37:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.550317663 +0000 UTC m=+0.294596174 container init f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.563969158 +0000 UTC m=+0.308247619 container start f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:37:42 compute-0 intelligent_volhard[268127]: 167 167
Oct 01 13:37:42 compute-0 systemd[1]: libpod-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope: Deactivated successfully.
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.587005021 +0000 UTC m=+0.331283472 container attach f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.587660002 +0000 UTC m=+0.331938453 container died f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-520935e59eb2494666d50208b1b0411386a3a97c27ed95b14712d019e22726c3-merged.mount: Deactivated successfully.
Oct 01 13:37:42 compute-0 podman[268110]: 2025-10-01 13:37:42.743269388 +0000 UTC m=+0.487547869 container remove f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:37:42 compute-0 systemd[1]: libpod-conmon-f6d2064b6d2079a9bf95b1dabad9b8d428857cd8e23f05f278c99b6af3b0a279.scope: Deactivated successfully.
Oct 01 13:37:42 compute-0 podman[268153]: 2025-10-01 13:37:42.997999812 +0000 UTC m=+0.070646372 container create ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:37:43 compute-0 podman[268153]: 2025-10-01 13:37:42.954513856 +0000 UTC m=+0.027160476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:43 compute-0 systemd[1]: Started libpod-conmon-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope.
Oct 01 13:37:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:43 compute-0 podman[268153]: 2025-10-01 13:37:43.104115151 +0000 UTC m=+0.176761721 container init ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:37:43 compute-0 podman[268153]: 2025-10-01 13:37:43.116405672 +0000 UTC m=+0.189052222 container start ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:43 compute-0 podman[268153]: 2025-10-01 13:37:43.138177906 +0000 UTC m=+0.210824456 container attach ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:37:43 compute-0 ceph-mon[74802]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:44 compute-0 charming_brahmagupta[268169]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:37:44 compute-0 charming_brahmagupta[268169]: --> relative data size: 1.0
Oct 01 13:37:44 compute-0 charming_brahmagupta[268169]: --> All data devices are unavailable
Oct 01 13:37:44 compute-0 systemd[1]: libpod-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Deactivated successfully.
Oct 01 13:37:44 compute-0 podman[268153]: 2025-10-01 13:37:44.229516504 +0000 UTC m=+1.302163094 container died ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:37:44 compute-0 systemd[1]: libpod-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Consumed 1.058s CPU time.
Oct 01 13:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f71da573ef89bc105f0954d73a72e2303e1de8acfc799237e287739d13209e1d-merged.mount: Deactivated successfully.
Oct 01 13:37:44 compute-0 podman[268153]: 2025-10-01 13:37:44.298960356 +0000 UTC m=+1.371606906 container remove ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:37:44 compute-0 systemd[1]: libpod-conmon-ce8617e5f93ca1d3ebcd61e5310085917f8b9c675c7db6165f721e4bb7b06b48.scope: Deactivated successfully.
Oct 01 13:37:44 compute-0 sudo[268046]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:44 compute-0 sudo[268212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:44 compute-0 sudo[268212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:44 compute-0 sudo[268212]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:44 compute-0 sudo[268237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:37:44 compute-0 sudo[268237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:44 compute-0 sudo[268237]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:44 compute-0 sudo[268262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:44 compute-0 sudo[268262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:44 compute-0 sudo[268262]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:44 compute-0 sudo[268287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:37:44 compute-0 sudo[268287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.020493645 +0000 UTC m=+0.044690654 container create 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:37:45 compute-0 systemd[1]: Started libpod-conmon-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope.
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:44.999852879 +0000 UTC m=+0.024049918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.128147574 +0000 UTC m=+0.152344613 container init 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.136230962 +0000 UTC m=+0.160427971 container start 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.140531028 +0000 UTC m=+0.164728087 container attach 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:37:45 compute-0 gifted_heisenberg[268369]: 167 167
Oct 01 13:37:45 compute-0 systemd[1]: libpod-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope: Deactivated successfully.
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.147274754 +0000 UTC m=+0.171471763 container died 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-82871bd6244f213aec242db39b19e2099c9b72c78a965538d70e14ec2bad8c7d-merged.mount: Deactivated successfully.
Oct 01 13:37:45 compute-0 podman[268353]: 2025-10-01 13:37:45.196094559 +0000 UTC m=+0.220291568 container remove 910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:37:45 compute-0 systemd[1]: libpod-conmon-910bfb0bd2182f223269c233580ffa218eb85f58ae21fbc7ba06c92e7bbf0af2.scope: Deactivated successfully.
Oct 01 13:37:45 compute-0 podman[268395]: 2025-10-01 13:37:45.398121512 +0000 UTC m=+0.053439503 container create cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:37:45 compute-0 systemd[1]: Started libpod-conmon-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope.
Oct 01 13:37:45 compute-0 podman[268395]: 2025-10-01 13:37:45.371301129 +0000 UTC m=+0.026619160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:45 compute-0 podman[268395]: 2025-10-01 13:37:45.494351098 +0000 UTC m=+0.149669099 container init cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:37:45 compute-0 podman[268395]: 2025-10-01 13:37:45.506820985 +0000 UTC m=+0.162139006 container start cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:37:45 compute-0 podman[268395]: 2025-10-01 13:37:45.51106663 +0000 UTC m=+0.166384601 container attach cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct 01 13:37:45 compute-0 ceph-mon[74802]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:46 compute-0 musing_albattani[268411]: {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     "0": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "devices": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "/dev/loop3"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             ],
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_name": "ceph_lv0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_size": "21470642176",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "name": "ceph_lv0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "tags": {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_name": "ceph",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.crush_device_class": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.encrypted": "0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_id": "0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.vdo": "0"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             },
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "vg_name": "ceph_vg0"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         }
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     ],
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     "1": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "devices": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "/dev/loop4"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             ],
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_name": "ceph_lv1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_size": "21470642176",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "name": "ceph_lv1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "tags": {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_name": "ceph",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.crush_device_class": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.encrypted": "0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_id": "1",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.vdo": "0"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             },
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "vg_name": "ceph_vg1"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         }
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     ],
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     "2": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "devices": [
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "/dev/loop5"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             ],
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_name": "ceph_lv2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_size": "21470642176",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "name": "ceph_lv2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "tags": {
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.cluster_name": "ceph",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.crush_device_class": "",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.encrypted": "0",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osd_id": "2",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:                 "ceph.vdo": "0"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             },
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "type": "block",
Oct 01 13:37:46 compute-0 musing_albattani[268411]:             "vg_name": "ceph_vg2"
Oct 01 13:37:46 compute-0 musing_albattani[268411]:         }
Oct 01 13:37:46 compute-0 musing_albattani[268411]:     ]
Oct 01 13:37:46 compute-0 musing_albattani[268411]: }
Oct 01 13:37:46 compute-0 systemd[1]: libpod-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope: Deactivated successfully.
Oct 01 13:37:46 compute-0 podman[268395]: 2025-10-01 13:37:46.328779563 +0000 UTC m=+0.984097564 container died cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-77935a820aa580a0cfd2be8e2f8ebe4c3383bcc13fdd65c012ecd41b40aecba6-merged.mount: Deactivated successfully.
Oct 01 13:37:46 compute-0 podman[268395]: 2025-10-01 13:37:46.386659676 +0000 UTC m=+1.041977647 container remove cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:37:46 compute-0 systemd[1]: libpod-conmon-cdbe4740dc525c8762da52f2836b07999fb88f5e98b4203de2397ed637b5ebe3.scope: Deactivated successfully.
Oct 01 13:37:46 compute-0 sudo[268287]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:46 compute-0 sudo[268431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:46 compute-0 sudo[268431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:46 compute-0 sudo[268431]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:46 compute-0 sudo[268456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:37:46 compute-0 sudo[268456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:46 compute-0 sudo[268456]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:46 compute-0 sudo[268481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:46 compute-0 sudo[268481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:46 compute-0 sudo[268481]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:46 compute-0 sudo[268506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:37:46 compute-0 sudo[268506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.150956298 +0000 UTC m=+0.065964051 container create 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:47 compute-0 systemd[1]: Started libpod-conmon-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope.
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.124588739 +0000 UTC m=+0.039596512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.256213891 +0000 UTC m=+0.171221704 container init 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.269905817 +0000 UTC m=+0.184913530 container start 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.274257355 +0000 UTC m=+0.189265318 container attach 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:37:47 compute-0 reverent_johnson[268590]: 167 167
Oct 01 13:37:47 compute-0 systemd[1]: libpod-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope: Deactivated successfully.
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.279209193 +0000 UTC m=+0.194216906 container died 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6eb0707b7d50c4e2aa513e57b3a1377d282533727cc5b967c48e4ea7f9aa6b3-merged.mount: Deactivated successfully.
Oct 01 13:37:47 compute-0 podman[268574]: 2025-10-01 13:37:47.325652443 +0000 UTC m=+0.240660146 container remove 147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_johnson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:37:47 compute-0 systemd[1]: libpod-conmon-147d5631b59654dc4ec8b37fd1f704c7c5fac5abc1e5e04a5ac0800f3018be1a.scope: Deactivated successfully.
Oct 01 13:37:47 compute-0 podman[268614]: 2025-10-01 13:37:47.505565553 +0000 UTC m=+0.056622865 container create 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:37:47 compute-0 systemd[1]: Started libpod-conmon-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope.
Oct 01 13:37:47 compute-0 podman[268614]: 2025-10-01 13:37:47.478319175 +0000 UTC m=+0.029376517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:37:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:37:47 compute-0 podman[268614]: 2025-10-01 13:37:47.618557061 +0000 UTC m=+0.169614413 container init 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:47 compute-0 podman[268614]: 2025-10-01 13:37:47.627542547 +0000 UTC m=+0.178599889 container start 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:37:47 compute-0 podman[268614]: 2025-10-01 13:37:47.639422226 +0000 UTC m=+0.190479538 container attach 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:37:47
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root']
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:37:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:37:47 compute-0 ceph-mon[74802]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]: {
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_id": 0,
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "type": "bluestore"
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     },
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_id": 2,
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "type": "bluestore"
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     },
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_id": 1,
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:         "type": "bluestore"
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]:     }
Oct 01 13:37:48 compute-0 vibrant_hugle[268630]: }
Oct 01 13:37:48 compute-0 systemd[1]: libpod-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Deactivated successfully.
Oct 01 13:37:48 compute-0 systemd[1]: libpod-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Consumed 1.083s CPU time.
Oct 01 13:37:48 compute-0 podman[268614]: 2025-10-01 13:37:48.701928305 +0000 UTC m=+1.252985617 container died 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e6d65f71ea242fdb3ec035e35e6fecfbb38425dbce169ac6bbc88065d257b9-merged.mount: Deactivated successfully.
Oct 01 13:37:48 compute-0 podman[268614]: 2025-10-01 13:37:48.768870277 +0000 UTC m=+1.319927579 container remove 6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:37:48 compute-0 systemd[1]: libpod-conmon-6769ee939c74fa7c3b0b48423353d4803382233abec8b407806b2a85006a4143.scope: Deactivated successfully.
Oct 01 13:37:48 compute-0 sudo[268506]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:37:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:37:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a732560-0f33-4d9f-8372-514eb3f6275f does not exist
Oct 01 13:37:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 430b7261-8409-42b6-ba91-dd87474caa77 does not exist
Oct 01 13:37:48 compute-0 sudo[268673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:37:48 compute-0 sudo[268673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:48 compute-0 sudo[268673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:48 compute-0 sudo[268698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:37:48 compute-0 sudo[268698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:37:48 compute-0 sudo[268698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:37:49 compute-0 ceph-mon[74802]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:37:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:51 compute-0 ceph-mon[74802]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:54 compute-0 ceph-mon[74802]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:55 compute-0 ceph-mon[74802]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:37:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:37:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:37:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:37:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:37:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1148074225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:37:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:57 compute-0 ceph-mon[74802]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:37:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:37:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:37:58 compute-0 nova_compute[260022]: 2025-10-01 13:37:58.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.343 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.344 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.393 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.394 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:37:59 compute-0 ceph-mon[74802]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:37:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:37:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977658984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:37:59 compute-0 nova_compute[260022]: 2025-10-01 13:37:59.869 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.049 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.051 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.051 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.052 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.125 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.125 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.140 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:38:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:38:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4579 writes, 20K keys, 4579 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4579 writes, 4579 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1270 writes, 5576 keys, 1270 commit groups, 1.0 writes per commit group, ingest: 8.38 MB, 0.01 MB/s
                                           Interval WAL: 1271 writes, 1271 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.4      1.88              0.08        11    0.171       0      0       0.0       0.0
                                             L6      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     28.2     23.5      3.00              0.26        10    0.300     43K   5162       0.0       0.0
                                            Sum      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     17.3     18.8      4.88              0.34        21    0.232     43K   5162       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.3     10.7     10.8      3.13              0.14         8    0.392     18K   1960       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     28.2     23.5      3.00              0.26        10    0.300     43K   5162       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.4      1.86              0.08        10    0.186       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 4.9 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 3.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 308.00 MB usage: 6.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00013 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(416,6.23 MB,2.02434%) FilterBlock(22,128.55 KB,0.0407578%) IndexBlock(22,240.75 KB,0.0763336%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:38:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:38:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573693310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.566 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.574 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.636 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.638 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:38:00 compute-0 nova_compute[260022]: 2025-10-01 13:38:00.638 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:38:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3977658984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:38:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3573693310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:38:01 compute-0 nova_compute[260022]: 2025-10-01 13:38:01.625 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:01 compute-0 nova_compute[260022]: 2025-10-01 13:38:01.626 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:01 compute-0 nova_compute[260022]: 2025-10-01 13:38:01.627 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:38:01 compute-0 ceph-mon[74802]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:03 compute-0 sshd-session[268767]: Invalid user subzero from 27.254.137.144 port 46308
Oct 01 13:38:03 compute-0 sshd-session[268767]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:38:03 compute-0 sshd-session[268767]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:38:03 compute-0 ceph-mon[74802]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:04 compute-0 nova_compute[260022]: 2025-10-01 13:38:04.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:04 compute-0 nova_compute[260022]: 2025-10-01 13:38:04.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:05 compute-0 sshd-session[268767]: Failed password for invalid user subzero from 27.254.137.144 port 46308 ssh2
Oct 01 13:38:05 compute-0 ceph-mon[74802]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:07 compute-0 sshd-session[268767]: Received disconnect from 27.254.137.144 port 46308:11: Bye Bye [preauth]
Oct 01 13:38:07 compute-0 sshd-session[268767]: Disconnected from invalid user subzero 27.254.137.144 port 46308 [preauth]
Oct 01 13:38:07 compute-0 ceph-mon[74802]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:10 compute-0 ceph-mon[74802]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:11 compute-0 podman[268771]: 2025-10-01 13:38:11.545708806 +0000 UTC m=+0.080408552 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, tcib_managed=true)
Oct 01 13:38:11 compute-0 podman[268770]: 2025-10-01 13:38:11.556380767 +0000 UTC m=+0.098543870 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:38:11 compute-0 podman[268769]: 2025-10-01 13:38:11.58256273 +0000 UTC m=+0.125498108 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 01 13:38:11 compute-0 podman[268772]: 2025-10-01 13:38:11.582630222 +0000 UTC m=+0.106400220 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible)
Oct 01 13:38:12 compute-0 ceph-mon[74802]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:38:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:38:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:38:12.306 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:38:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:14 compute-0 ceph-mon[74802]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:15 compute-0 ceph-mon[74802]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:17 compute-0 ceph-mon[74802]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:19 compute-0 ceph-mon[74802]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:21 compute-0 ceph-mon[74802]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:23 compute-0 ceph-mon[74802]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:25 compute-0 ceph-mon[74802]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.722658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906722817, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1317, "num_deletes": 251, "total_data_size": 2071375, "memory_usage": 2105840, "flush_reason": "Manual Compaction"}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906768194, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2041265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19726, "largest_seqno": 21042, "table_properties": {"data_size": 2035024, "index_size": 3508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12922, "raw_average_key_size": 19, "raw_value_size": 2022541, "raw_average_value_size": 3087, "num_data_blocks": 161, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325769, "oldest_key_time": 1759325769, "file_creation_time": 1759325906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 45582 microseconds, and 7853 cpu microseconds.
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.768256) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2041265 bytes OK
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.768284) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779923) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779938) EVENT_LOG_v1 {"time_micros": 1759325906779933, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.779960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2065481, prev total WAL file size 2065481, number of live WAL files 2.
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.780832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1993KB)], [47(7348KB)]
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906780928, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9566169, "oldest_snapshot_seqno": -1}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4351 keys, 7793979 bytes, temperature: kUnknown
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906902509, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7793979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7763402, "index_size": 18627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107584, "raw_average_key_size": 24, "raw_value_size": 7683108, "raw_average_value_size": 1765, "num_data_blocks": 780, "num_entries": 4351, "num_filter_entries": 4351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759325906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.902895) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7793979 bytes
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.910579) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.7 rd, 64.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 4865, records dropped: 514 output_compression: NoCompression
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.910623) EVENT_LOG_v1 {"time_micros": 1759325906910604, "job": 24, "event": "compaction_finished", "compaction_time_micros": 121530, "compaction_time_cpu_micros": 21482, "output_level": 6, "num_output_files": 1, "total_output_size": 7793979, "num_input_records": 4865, "num_output_records": 4351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906911454, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759325906914370, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.780582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:26 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:38:26.914441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:38:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:27 compute-0 ceph-mon[74802]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:29 compute-0 ceph-mon[74802]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:31 compute-0 ceph-mon[74802]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:32 compute-0 sshd-session[268848]: Invalid user seekcy from 200.7.101.139 port 45728
Oct 01 13:38:32 compute-0 sshd-session[268848]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:38:32 compute-0 sshd-session[268848]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=200.7.101.139
Oct 01 13:38:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:33 compute-0 sshd-session[268848]: Failed password for invalid user seekcy from 200.7.101.139 port 45728 ssh2
Oct 01 13:38:33 compute-0 ceph-mon[74802]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:35 compute-0 sshd-session[268848]: Received disconnect from 200.7.101.139 port 45728:11: Bye Bye [preauth]
Oct 01 13:38:35 compute-0 sshd-session[268848]: Disconnected from invalid user seekcy 200.7.101.139 port 45728 [preauth]
Oct 01 13:38:35 compute-0 ceph-mon[74802]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:37 compute-0 ceph-mon[74802]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:39 compute-0 ceph-mon[74802]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:41 compute-0 ceph-mon[74802]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:42 compute-0 podman[268853]: 2025-10-01 13:38:42.544615668 +0000 UTC m=+0.078985527 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 01 13:38:42 compute-0 podman[268851]: 2025-10-01 13:38:42.549814033 +0000 UTC m=+0.099356396 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=multipathd)
Oct 01 13:38:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:42 compute-0 podman[268852]: 2025-10-01 13:38:42.565343017 +0000 UTC m=+0.103485206 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 01 13:38:42 compute-0 podman[268850]: 2025-10-01 13:38:42.567350432 +0000 UTC m=+0.110709217 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct 01 13:38:43 compute-0 ceph-mon[74802]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:46 compute-0 ceph-mon[74802]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:38:47
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct 01 13:38:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:38:48 compute-0 ceph-mon[74802]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:48 compute-0 ceph-mgr[75103]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2102413293
Oct 01 13:38:49 compute-0 sudo[268931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:49 compute-0 sudo[268931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:49 compute-0 sudo[268931]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:49 compute-0 sudo[268956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:38:49 compute-0 sudo[268956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:49 compute-0 sudo[268956]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:49 compute-0 sudo[268981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:49 compute-0 sudo[268981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:49 compute-0 sudo[268981]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:49 compute-0 sudo[269006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:38:49 compute-0 sudo[269006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:49 compute-0 sudo[269006]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:49 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f0a1ee33-861d-4a9d-90e8-6ffea26099de does not exist
Oct 01 13:38:49 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 61949868-af6d-42ed-9d9e-5d713c80bfba does not exist
Oct 01 13:38:49 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 68073741-b2c9-49da-8159-19d92e77b2bd does not exist
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:38:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:38:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:38:49 compute-0 sudo[269063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:49 compute-0 sudo[269063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:49 compute-0 sudo[269063]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:50 compute-0 sudo[269088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:38:50 compute-0 sudo[269088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:50 compute-0 sudo[269088]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:50 compute-0 ceph-mon[74802]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:38:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:38:50 compute-0 sudo[269113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:50 compute-0 sudo[269113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:50 compute-0 sudo[269113]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:50 compute-0 sudo[269138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:38:50 compute-0 sudo[269138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:50 compute-0 podman[269203]: 2025-10-01 13:38:50.619792913 +0000 UTC m=+0.028115357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:50 compute-0 podman[269203]: 2025-10-01 13:38:50.828874913 +0000 UTC m=+0.237197347 container create bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:38:50 compute-0 systemd[1]: Started libpod-conmon-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope.
Oct 01 13:38:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:51 compute-0 podman[269203]: 2025-10-01 13:38:51.0900493 +0000 UTC m=+0.498371824 container init bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:38:51 compute-0 podman[269203]: 2025-10-01 13:38:51.100492663 +0000 UTC m=+0.508815127 container start bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:38:51 compute-0 xenodochial_fermi[269219]: 167 167
Oct 01 13:38:51 compute-0 systemd[1]: libpod-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope: Deactivated successfully.
Oct 01 13:38:51 compute-0 podman[269203]: 2025-10-01 13:38:51.119786998 +0000 UTC m=+0.528109532 container attach bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:38:51 compute-0 podman[269203]: 2025-10-01 13:38:51.120394357 +0000 UTC m=+0.528716831 container died bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:38:51 compute-0 ceph-mon[74802]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-05507c08c8b486be2aa641a19272566fad8149c78fc2eb02dfb3b00eadee43e0-merged.mount: Deactivated successfully.
Oct 01 13:38:51 compute-0 podman[269203]: 2025-10-01 13:38:51.610779255 +0000 UTC m=+1.019101729 container remove bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:38:51 compute-0 systemd[1]: libpod-conmon-bc355413002bd499324a887fdf26dff49debf14ed1b9ff9f52c8f87ddbf16070.scope: Deactivated successfully.
Oct 01 13:38:51 compute-0 podman[269246]: 2025-10-01 13:38:51.844888901 +0000 UTC m=+0.062082218 container create 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:38:51 compute-0 systemd[1]: Started libpod-conmon-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope.
Oct 01 13:38:51 compute-0 podman[269246]: 2025-10-01 13:38:51.816231819 +0000 UTC m=+0.033425226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:51 compute-0 podman[269246]: 2025-10-01 13:38:51.949489902 +0000 UTC m=+0.166683309 container init 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:38:51 compute-0 podman[269246]: 2025-10-01 13:38:51.961259097 +0000 UTC m=+0.178452444 container start 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:38:51 compute-0 podman[269246]: 2025-10-01 13:38:51.968293492 +0000 UTC m=+0.185486839 container attach 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:38:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:53 compute-0 trusting_germain[269263]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:38:53 compute-0 trusting_germain[269263]: --> relative data size: 1.0
Oct 01 13:38:53 compute-0 trusting_germain[269263]: --> All data devices are unavailable
Oct 01 13:38:53 compute-0 systemd[1]: libpod-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Deactivated successfully.
Oct 01 13:38:53 compute-0 systemd[1]: libpod-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Consumed 1.239s CPU time.
Oct 01 13:38:53 compute-0 podman[269246]: 2025-10-01 13:38:53.25526918 +0000 UTC m=+1.472462537 container died 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1146e281dc32e517ec5b4dc6db094bb5c86c551329e2b6b0cf2d0f1d57785cb-merged.mount: Deactivated successfully.
Oct 01 13:38:53 compute-0 podman[269246]: 2025-10-01 13:38:53.343080687 +0000 UTC m=+1.560274044 container remove 1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 13:38:53 compute-0 systemd[1]: libpod-conmon-1e0db75a21f8f56fc8b2ea44a38eb0f1b76ac33688a8fe9f1e9b4a5978996cc1.scope: Deactivated successfully.
Oct 01 13:38:53 compute-0 sudo[269138]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:53 compute-0 sudo[269306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:53 compute-0 sudo[269306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:53 compute-0 sudo[269306]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:53 compute-0 sudo[269331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:38:53 compute-0 sudo[269331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:53 compute-0 sudo[269331]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:53 compute-0 sudo[269356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:53 compute-0 sudo[269356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:53 compute-0 sudo[269356]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:53 compute-0 ceph-mon[74802]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:53 compute-0 sudo[269381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:38:53 compute-0 sudo[269381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.111572913 +0000 UTC m=+0.062248034 container create 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:38:54 compute-0 systemd[1]: Started libpod-conmon-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope.
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.088860019 +0000 UTC m=+0.039535240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.202198779 +0000 UTC m=+0.152873950 container init 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.214017525 +0000 UTC m=+0.164692646 container start 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:38:54 compute-0 eloquent_fermi[269464]: 167 167
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.219124488 +0000 UTC m=+0.169799679 container attach 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:38:54 compute-0 systemd[1]: libpod-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope: Deactivated successfully.
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.220950886 +0000 UTC m=+0.171626007 container died 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 13:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f7aba451db1d896ab3f75e164b65473f228118dbe10f323e89e39a07814e86-merged.mount: Deactivated successfully.
Oct 01 13:38:54 compute-0 podman[269447]: 2025-10-01 13:38:54.268633655 +0000 UTC m=+0.219308786 container remove 6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:38:54 compute-0 systemd[1]: libpod-conmon-6e69490bed35b07d3ccd29e72315aa9283b2d7b09a2ea760185febca278527be.scope: Deactivated successfully.
Oct 01 13:38:54 compute-0 podman[269487]: 2025-10-01 13:38:54.482808086 +0000 UTC m=+0.063761172 container create 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:38:54 compute-0 systemd[1]: Started libpod-conmon-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope.
Oct 01 13:38:54 compute-0 podman[269487]: 2025-10-01 13:38:54.45561673 +0000 UTC m=+0.036569876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:54 compute-0 podman[269487]: 2025-10-01 13:38:54.586416475 +0000 UTC m=+0.167369571 container init 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 01 13:38:54 compute-0 podman[269487]: 2025-10-01 13:38:54.602886091 +0000 UTC m=+0.183839147 container start 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:38:54 compute-0 podman[269487]: 2025-10-01 13:38:54.60788687 +0000 UTC m=+0.188840046 container attach 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:38:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:38:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:38:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:38:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:38:55 compute-0 determined_nobel[269503]: {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     "0": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "devices": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "/dev/loop3"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             ],
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_name": "ceph_lv0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_size": "21470642176",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "name": "ceph_lv0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "tags": {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_name": "ceph",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.crush_device_class": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.encrypted": "0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_id": "0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.vdo": "0"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             },
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "vg_name": "ceph_vg0"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         }
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     ],
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     "1": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "devices": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "/dev/loop4"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             ],
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_name": "ceph_lv1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_size": "21470642176",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "name": "ceph_lv1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "tags": {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_name": "ceph",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.crush_device_class": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.encrypted": "0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_id": "1",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.vdo": "0"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             },
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "vg_name": "ceph_vg1"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         }
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     ],
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     "2": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "devices": [
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "/dev/loop5"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             ],
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_name": "ceph_lv2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_size": "21470642176",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "name": "ceph_lv2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "tags": {
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.cluster_name": "ceph",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.crush_device_class": "",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.encrypted": "0",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osd_id": "2",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:                 "ceph.vdo": "0"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             },
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "type": "block",
Oct 01 13:38:55 compute-0 determined_nobel[269503]:             "vg_name": "ceph_vg2"
Oct 01 13:38:55 compute-0 determined_nobel[269503]:         }
Oct 01 13:38:55 compute-0 determined_nobel[269503]:     ]
Oct 01 13:38:55 compute-0 determined_nobel[269503]: }
Oct 01 13:38:55 compute-0 systemd[1]: libpod-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope: Deactivated successfully.
Oct 01 13:38:55 compute-0 podman[269487]: 2025-10-01 13:38:55.425596543 +0000 UTC m=+1.006549649 container died 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5faf69f261c71f40c38438e5d8c21bc96a201e7a17dc110c4368efe57d939b7f-merged.mount: Deactivated successfully.
Oct 01 13:38:55 compute-0 podman[269487]: 2025-10-01 13:38:55.493896218 +0000 UTC m=+1.074849274 container remove 9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nobel, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:38:55 compute-0 systemd[1]: libpod-conmon-9e7452882eba2e42deedc188dd2d267a380bc68db9684e897c6fe67b387c4b4a.scope: Deactivated successfully.
Oct 01 13:38:55 compute-0 sudo[269381]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:55 compute-0 sudo[269527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:55 compute-0 sudo[269527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:55 compute-0 sudo[269527]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:55 compute-0 ceph-mon[74802]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:38:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3256487368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:38:55 compute-0 sudo[269552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:38:55 compute-0 sudo[269552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:55 compute-0 sudo[269552]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:55 compute-0 sudo[269577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:55 compute-0 sudo[269577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:55 compute-0 sudo[269577]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:55 compute-0 sudo[269602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:38:55 compute-0 sudo[269602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.279651274 +0000 UTC m=+0.045669016 container create c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:38:56 compute-0 systemd[1]: Started libpod-conmon-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope.
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.259308386 +0000 UTC m=+0.025326178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.377386457 +0000 UTC m=+0.143404239 container init c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.387895051 +0000 UTC m=+0.153912833 container start c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.393786109 +0000 UTC m=+0.159803891 container attach c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:38:56 compute-0 friendly_murdock[269685]: 167 167
Oct 01 13:38:56 compute-0 systemd[1]: libpod-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope: Deactivated successfully.
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.396976971 +0000 UTC m=+0.162994753 container died c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 13:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-794a62d10ff35c590681dd9fa0fe7f7b8c350f361763087ca1618052fc1ff6bb-merged.mount: Deactivated successfully.
Oct 01 13:38:56 compute-0 podman[269668]: 2025-10-01 13:38:56.449949038 +0000 UTC m=+0.215966830 container remove c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:38:56 compute-0 systemd[1]: libpod-conmon-c57f60b7112893fe2d3b5a32bd28cb7b9cd7d5102c93bb33d101a416ab46d968.scope: Deactivated successfully.
Oct 01 13:38:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:56 compute-0 podman[269709]: 2025-10-01 13:38:56.745568813 +0000 UTC m=+0.105430719 container create 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:38:56 compute-0 podman[269709]: 2025-10-01 13:38:56.68862598 +0000 UTC m=+0.048487936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:38:56 compute-0 systemd[1]: Started libpod-conmon-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope.
Oct 01 13:38:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:38:57 compute-0 podman[269709]: 2025-10-01 13:38:57.051872349 +0000 UTC m=+0.411734255 container init 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:38:57 compute-0 podman[269709]: 2025-10-01 13:38:57.063765728 +0000 UTC m=+0.423627634 container start 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:38:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:38:57 compute-0 podman[269709]: 2025-10-01 13:38:57.260373828 +0000 UTC m=+0.620235714 container attach 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:38:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:38:57 compute-0 ceph-mon[74802]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:58 compute-0 quirky_gates[269726]: {
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_id": 0,
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "type": "bluestore"
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     },
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_id": 2,
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "type": "bluestore"
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     },
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_id": 1,
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:38:58 compute-0 quirky_gates[269726]:         "type": "bluestore"
Oct 01 13:38:58 compute-0 quirky_gates[269726]:     }
Oct 01 13:38:58 compute-0 quirky_gates[269726]: }
Oct 01 13:38:58 compute-0 systemd[1]: libpod-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Deactivated successfully.
Oct 01 13:38:58 compute-0 systemd[1]: libpod-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Consumed 1.042s CPU time.
Oct 01 13:38:58 compute-0 podman[269709]: 2025-10-01 13:38:58.09936282 +0000 UTC m=+1.459224696 container died 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bab4be0ac48b0add178604103be54b0d1e085e6b13b75e08ca3155176ebbe63-merged.mount: Deactivated successfully.
Oct 01 13:38:58 compute-0 podman[269709]: 2025-10-01 13:38:58.257553228 +0000 UTC m=+1.617415144 container remove 646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:38:58 compute-0 systemd[1]: libpod-conmon-646aa90c61aa3e5508093cea153c350680efe39e04b17e27be6b0f6592d5af1f.scope: Deactivated successfully.
Oct 01 13:38:58 compute-0 sudo[269602]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:38:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:38:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 60a1e59f-8b38-49e3-93bb-f15c0c3a694f does not exist
Oct 01 13:38:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bec68e06-672c-4338-a826-173fbd918c67 does not exist
Oct 01 13:38:58 compute-0 nova_compute[260022]: 2025-10-01 13:38:58.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:58 compute-0 sudo[269770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:38:58 compute-0 sudo[269770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:58 compute-0 sudo[269770]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:58 compute-0 sudo[269795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:38:58 compute-0 sudo[269795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:38:58 compute-0 sudo[269795]: pam_unix(sudo:session): session closed for user root
Oct 01 13:38:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:38:59 compute-0 ceph-mon[74802]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.366 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.368 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:38:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:38:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772859159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.829 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.994 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.995 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.996 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:38:59 compute-0 nova_compute[260022]: 2025-10-01 13:38:59.996 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.063 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.064 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.077 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:39:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2772859159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:39:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:39:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478792039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.505 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.510 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.531 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.532 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:39:00 compute-0 nova_compute[260022]: 2025-10-01 13:39:00.533 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:39:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1478792039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:39:01 compute-0 ceph-mon[74802]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.529 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.530 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.555 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.556 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.557 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:01 compute-0 nova_compute[260022]: 2025-10-01 13:39:01.557 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:02 compute-0 nova_compute[260022]: 2025-10-01 13:39:02.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:02 compute-0 nova_compute[260022]: 2025-10-01 13:39:02.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:39:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:03 compute-0 ceph-mon[74802]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:05 compute-0 ceph-mon[74802]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:06 compute-0 nova_compute[260022]: 2025-10-01 13:39:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:07 compute-0 ceph-mon[74802]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:09 compute-0 ceph-mon[74802]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:11 compute-0 ceph-mon[74802]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.307 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:39:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.308 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:39:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:39:12.308 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:39:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:13 compute-0 podman[269867]: 2025-10-01 13:39:13.560785 +0000 UTC m=+0.079631747 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct 01 13:39:13 compute-0 podman[269866]: 2025-10-01 13:39:13.586791339 +0000 UTC m=+0.112980980 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:39:13 compute-0 podman[269865]: 2025-10-01 13:39:13.597749268 +0000 UTC m=+0.127227944 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:39:13 compute-0 podman[269864]: 2025-10-01 13:39:13.602588132 +0000 UTC m=+0.132750200 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:39:13 compute-0 ceph-mon[74802]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:15 compute-0 ceph-mon[74802]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:17 compute-0 unix_chkpwd[269944]: password check failed for user (root)
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:17 compute-0 sshd-session[269942]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:17 compute-0 ceph-mon[74802]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:19 compute-0 sshd-session[269942]: Failed password for root from 27.254.137.144 port 41916 ssh2
Oct 01 13:39:19 compute-0 ceph-mon[74802]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:21 compute-0 sshd-session[269942]: Received disconnect from 27.254.137.144 port 41916:11: Bye Bye [preauth]
Oct 01 13:39:21 compute-0 sshd-session[269942]: Disconnected from authenticating user root 27.254.137.144 port 41916 [preauth]
Oct 01 13:39:22 compute-0 ceph-mon[74802]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:24 compute-0 ceph-mon[74802]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:25 compute-0 ceph-mon[74802]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:27 compute-0 ceph-mon[74802]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:29 compute-0 ceph-mon[74802]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:31 compute-0 ceph-mon[74802]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:33 compute-0 ceph-mon[74802]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:35 compute-0 ceph-mon[74802]: pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:37 compute-0 ceph-mon[74802]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:39 compute-0 ceph-mon[74802]: pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:41 compute-0 ceph-mon[74802]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:44 compute-0 ceph-mon[74802]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:44 compute-0 podman[269947]: 2025-10-01 13:39:44.524435656 +0000 UTC m=+0.070504907 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:39:44 compute-0 podman[269946]: 2025-10-01 13:39:44.530372515 +0000 UTC m=+0.077966625 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:39:44 compute-0 podman[269948]: 2025-10-01 13:39:44.534363481 +0000 UTC m=+0.069843285 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:39:44 compute-0 podman[269945]: 2025-10-01 13:39:44.56570526 +0000 UTC m=+0.113446994 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:39:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:45 compute-0 ceph-mon[74802]: pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 01 13:39:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct 01 13:39:47 compute-0 ceph-mon[74802]: pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct 01 13:39:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:39:47
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'images', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct 01 13:39:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:39:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct 01 13:39:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct 01 13:39:48 compute-0 ceph-mon[74802]: osdmap e135: 3 total, 3 up, 3 in
Oct 01 13:39:48 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct 01 13:39:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct 01 13:39:49 compute-0 ceph-mon[74802]: pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:49 compute-0 ceph-mon[74802]: osdmap e136: 3 total, 3 up, 3 in
Oct 01 13:39:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct 01 13:39:49 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct 01 13:39:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:50 compute-0 ceph-mon[74802]: osdmap e137: 3 total, 3 up, 3 in
Oct 01 13:39:51 compute-0 ceph-mon[74802]: pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:39:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:39:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 4.9 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 757 KiB/s wr, 2 op/s
Oct 01 13:39:52 compute-0 sshd-session[270026]: Connection closed by 172.105.102.42 port 34678
Oct 01 13:39:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct 01 13:39:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct 01 13:39:52 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct 01 13:39:53 compute-0 ceph-mon[74802]: pgmap v1060: 305 pgs: 305 active+clean; 4.9 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 757 KiB/s wr, 2 op/s
Oct 01 13:39:53 compute-0 ceph-mon[74802]: osdmap e138: 3 total, 3 up, 3 in
Oct 01 13:39:54 compute-0 nova_compute[260022]: 2025-10-01 13:39:54.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:54 compute-0 nova_compute[260022]: 2025-10-01 13:39:54.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 13:39:54 compute-0 nova_compute[260022]: 2025-10-01 13:39:54.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 13:39:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 MiB/s wr, 55 op/s
Oct 01 13:39:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:39:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:39:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:39:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:39:55 compute-0 nova_compute[260022]: 2025-10-01 13:39:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:55 compute-0 ceph-mon[74802]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 MiB/s wr, 55 op/s
Oct 01 13:39:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:39:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1025663506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:39:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:39:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 6047 writes, 24K keys, 6047 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6047 writes, 1095 syncs, 5.52 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 297 writes, 645 keys, 297 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 297 writes, 143 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:39:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006657947108810315 of space, bias 1.0, pg target 0.19973841326430944 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:39:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:39:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:39:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct 01 13:39:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct 01 13:39:57 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct 01 13:39:57 compute-0 ceph-mon[74802]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.2 MiB/s wr, 42 op/s
Oct 01 13:39:57 compute-0 ceph-mon[74802]: osdmap e139: 3 total, 3 up, 3 in
Oct 01 13:39:58 compute-0 nova_compute[260022]: 2025-10-01 13:39:58.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:58 compute-0 sudo[270027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:39:58 compute-0 sudo[270027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:39:58 compute-0 sudo[270027]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:58 compute-0 sudo[270052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:39:58 compute-0 sudo[270052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:58 compute-0 sudo[270052]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:58 compute-0 sudo[270077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:39:58 compute-0 sudo[270077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:58 compute-0 sudo[270077]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:58 compute-0 sudo[270102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:39:58 compute-0 sudo[270102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.369 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:39:59 compute-0 sudo[270102]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:59 compute-0 sudo[270180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:39:59 compute-0 sudo[270180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:59 compute-0 sudo[270180]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:59 compute-0 sudo[270205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:39:59 compute-0 sudo[270205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:59 compute-0 sudo[270205]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:39:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170474545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:39:59 compute-0 sudo[270230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:39:59 compute-0 sudo[270230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:39:59 compute-0 ceph-mon[74802]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:39:59 compute-0 sudo[270230]: pam_unix(sudo:session): session closed for user root
Oct 01 13:39:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4170474545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:39:59 compute-0 nova_compute[260022]: 2025-10-01 13:39:59.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:39:59 compute-0 sudo[270257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 01 13:39:59 compute-0 sudo[270257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.011 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.012 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.012 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.013 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.149 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.150 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:40:00 compute-0 sudo[270257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5baddfde-d4dd-4b3f-9dc5-d2f2ed51a5cc does not exist
Oct 01 13:40:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b1a0a70-55b3-4f5a-a503-6e4fa0bfd4e0 does not exist
Oct 01 13:40:00 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3b0ba2d3-1486-4f7f-908b-a11131ed5358 does not exist
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.235 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:40:00 compute-0 sudo[270302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:00 compute-0 sudo[270302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:00 compute-0 sudo[270302]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:00 compute-0 sudo[270327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:40:00 compute-0 sudo[270327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:00 compute-0 sudo[270327]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:00 compute-0 sudo[270371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:00 compute-0 sudo[270371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:00 compute-0 sudo[270371]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:00 compute-0 sudo[270396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:40:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.6 MiB/s wr, 45 op/s
Oct 01 13:40:00 compute-0 sudo[270396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:40:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251561042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.704 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.713 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.736 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.738 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:40:00 compute-0 nova_compute[260022]: 2025-10-01 13:40:00.739 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:40:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:40:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7211 writes, 28K keys, 7211 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7211 writes, 1430 syncs, 5.04 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 237 writes, 477 keys, 237 commit groups, 1.0 writes per commit group, ingest: 0.24 MB, 0.00 MB/s
                                           Interval WAL: 237 writes, 110 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:40:00 compute-0 podman[270462]: 2025-10-01 13:40:00.990514583 +0000 UTC m=+0.058136403 container create 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:40:01 compute-0 systemd[1]: Started libpod-conmon-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope.
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:00.963546594 +0000 UTC m=+0.031168474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:01.102031704 +0000 UTC m=+0.169653574 container init 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:01.11664478 +0000 UTC m=+0.184266600 container start 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:01.121031679 +0000 UTC m=+0.188653509 container attach 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:40:01 compute-0 inspiring_agnesi[270478]: 167 167
Oct 01 13:40:01 compute-0 systemd[1]: libpod-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope: Deactivated successfully.
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:01.127602199 +0000 UTC m=+0.195224029 container died 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-89a2945a9388f9532ef8c2e8cc201db7b36584a8eb7ff3c1b903f00f1f0877e8-merged.mount: Deactivated successfully.
Oct 01 13:40:01 compute-0 podman[270462]: 2025-10-01 13:40:01.185140132 +0000 UTC m=+0.252761922 container remove 2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:40:01 compute-0 systemd[1]: libpod-conmon-2c08dff9c615b9fbf0e5f3a396c2e69718e6ebb3343d77355fde838fc38187a2.scope: Deactivated successfully.
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.6 MiB/s wr, 45 op/s
Oct 01 13:40:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2251561042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:40:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct 01 13:40:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct 01 13:40:01 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct 01 13:40:01 compute-0 podman[270504]: 2025-10-01 13:40:01.405250042 +0000 UTC m=+0.061829561 container create 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:40:01 compute-0 systemd[1]: Started libpod-conmon-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope.
Oct 01 13:40:01 compute-0 podman[270504]: 2025-10-01 13:40:01.375791523 +0000 UTC m=+0.032371092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:01 compute-0 podman[270504]: 2025-10-01 13:40:01.537205624 +0000 UTC m=+0.193785193 container init 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:40:01 compute-0 podman[270504]: 2025-10-01 13:40:01.551878912 +0000 UTC m=+0.208458431 container start 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:40:01 compute-0 podman[270504]: 2025-10-01 13:40:01.556090166 +0000 UTC m=+0.212669685 container attach 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:40:02 compute-0 ceph-mon[74802]: osdmap e140: 3 total, 3 up, 3 in
Oct 01 13:40:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.737 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.739 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.739 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.740 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:40:02 compute-0 nice_leakey[270520]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:40:02 compute-0 nice_leakey[270520]: --> relative data size: 1.0
Oct 01 13:40:02 compute-0 nice_leakey[270520]: --> All data devices are unavailable
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.759 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.760 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:02 compute-0 nova_compute[260022]: 2025-10-01 13:40:02.761 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:02 compute-0 sshd-session[270497]: Invalid user admin from 78.128.112.74 port 37500
Oct 01 13:40:02 compute-0 systemd[1]: libpod-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Deactivated successfully.
Oct 01 13:40:02 compute-0 systemd[1]: libpod-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Consumed 1.198s CPU time.
Oct 01 13:40:02 compute-0 podman[270504]: 2025-10-01 13:40:02.792151704 +0000 UTC m=+1.448731213 container died 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac47c09b62d9667cd6bf58a4733b917ea6612ffb0e8367d0ad9d2e111b280077-merged.mount: Deactivated successfully.
Oct 01 13:40:02 compute-0 podman[270504]: 2025-10-01 13:40:02.875450496 +0000 UTC m=+1.532030015 container remove 0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:40:02 compute-0 systemd[1]: libpod-conmon-0f0b315e3661633fa284795b7d712e824d640b410732be47b9ddd4cffc95942d.scope: Deactivated successfully.
Oct 01 13:40:02 compute-0 sudo[270396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:02 compute-0 sshd-session[270497]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:40:02 compute-0 sshd-session[270497]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74
Oct 01 13:40:02 compute-0 sudo[270563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:02 compute-0 sudo[270563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:02 compute-0 sudo[270563]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:03 compute-0 sudo[270588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:40:03 compute-0 sudo[270588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:03 compute-0 sudo[270588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:03 compute-0 sudo[270613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:03 compute-0 sudo[270613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:03 compute-0 sudo[270613]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:03 compute-0 sudo[270638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:40:03 compute-0 sudo[270638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:03 compute-0 ceph-mon[74802]: pgmap v1068: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 01 13:40:03 compute-0 nova_compute[260022]: 2025-10-01 13:40:03.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.56757707 +0000 UTC m=+0.061769938 container create b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:40:03 compute-0 systemd[1]: Started libpod-conmon-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope.
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.540710154 +0000 UTC m=+0.034903102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.695130822 +0000 UTC m=+0.189323730 container init b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.707017401 +0000 UTC m=+0.201210259 container start b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 01 13:40:03 compute-0 zen_beaver[270720]: 167 167
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.712098563 +0000 UTC m=+0.206291481 container attach b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:40:03 compute-0 systemd[1]: libpod-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope: Deactivated successfully.
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.715847632 +0000 UTC m=+0.210040500 container died b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1eb332d16cc4de876a794bac2fb6af2c2ebf3c36cd8ad0980861c793e1fcfec-merged.mount: Deactivated successfully.
Oct 01 13:40:03 compute-0 podman[270704]: 2025-10-01 13:40:03.778456376 +0000 UTC m=+0.272649254 container remove b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:40:03 compute-0 systemd[1]: libpod-conmon-b7a9bbf3a6bd05637432c0a7e0cba9e7ee98ae03f8716a0d00e45595d9909f6a.scope: Deactivated successfully.
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:04.005136985 +0000 UTC m=+0.068877604 container create 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:40:04 compute-0 systemd[1]: Started libpod-conmon-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope.
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:03.979373384 +0000 UTC m=+0.043114073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:04.117256736 +0000 UTC m=+0.180997335 container init 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:04.129278159 +0000 UTC m=+0.193018758 container start 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:04.132571814 +0000 UTC m=+0.196312413 container attach 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:40:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct 01 13:40:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct 01 13:40:04 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct 01 13:40:04 compute-0 nova_compute[260022]: 2025-10-01 13:40:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:04 compute-0 nova_compute[260022]: 2025-10-01 13:40:04.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:40:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 KiB/s wr, 40 op/s
Oct 01 13:40:04 compute-0 sshd-session[270497]: Failed password for invalid user admin from 78.128.112.74 port 37500 ssh2
Oct 01 13:40:04 compute-0 zen_agnesi[270762]: {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     "0": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "devices": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "/dev/loop3"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             ],
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_name": "ceph_lv0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_size": "21470642176",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "name": "ceph_lv0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "tags": {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_name": "ceph",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.crush_device_class": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.encrypted": "0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_id": "0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.vdo": "0"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             },
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "vg_name": "ceph_vg0"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         }
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     ],
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     "1": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "devices": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "/dev/loop4"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             ],
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_name": "ceph_lv1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_size": "21470642176",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "name": "ceph_lv1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "tags": {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_name": "ceph",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.crush_device_class": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.encrypted": "0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_id": "1",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.vdo": "0"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             },
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "vg_name": "ceph_vg1"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         }
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     ],
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     "2": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "devices": [
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "/dev/loop5"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             ],
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_name": "ceph_lv2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_size": "21470642176",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "name": "ceph_lv2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "tags": {
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.cluster_name": "ceph",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.crush_device_class": "",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.encrypted": "0",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osd_id": "2",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:                 "ceph.vdo": "0"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             },
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "type": "block",
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:             "vg_name": "ceph_vg2"
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:         }
Oct 01 13:40:04 compute-0 zen_agnesi[270762]:     ]
Oct 01 13:40:04 compute-0 zen_agnesi[270762]: }
Oct 01 13:40:04 compute-0 systemd[1]: libpod-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope: Deactivated successfully.
Oct 01 13:40:04 compute-0 podman[270745]: 2025-10-01 13:40:04.933818253 +0000 UTC m=+0.997558852 container died 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-456e18bb3d73fbe2c561d59aabd5a6ab34e2f29d88a3131edbca037c7e6bc189-merged.mount: Deactivated successfully.
Oct 01 13:40:05 compute-0 podman[270745]: 2025-10-01 13:40:05.00814547 +0000 UTC m=+1.071886079 container remove 1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:40:05 compute-0 systemd[1]: libpod-conmon-1f25bf45165c50ea0a8cca0bc2b75f600bc6fac7090268826e9d32bc24ac9d9f.scope: Deactivated successfully.
Oct 01 13:40:05 compute-0 sudo[270638]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:05 compute-0 sudo[270785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:05 compute-0 sudo[270785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:05 compute-0 sudo[270785]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:05 compute-0 sshd-session[270497]: Connection closed by invalid user admin 78.128.112.74 port 37500 [preauth]
Oct 01 13:40:05 compute-0 sudo[270810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:40:05 compute-0 sudo[270810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:05 compute-0 sudo[270810]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:05 compute-0 ceph-mon[74802]: osdmap e141: 3 total, 3 up, 3 in
Oct 01 13:40:05 compute-0 ceph-mon[74802]: pgmap v1070: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 KiB/s wr, 40 op/s
Oct 01 13:40:05 compute-0 sudo[270835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:05 compute-0 sudo[270835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:05 compute-0 sudo[270835]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:05 compute-0 nova_compute[260022]: 2025-10-01 13:40:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:05 compute-0 nova_compute[260022]: 2025-10-01 13:40:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 13:40:05 compute-0 sudo[270860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:40:05 compute-0 sudo[270860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:40:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6049 writes, 25K keys, 6049 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6049 writes, 1072 syncs, 5.64 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 414 writes, 984 keys, 414 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                           Interval WAL: 414 writes, 197 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.827217686 +0000 UTC m=+0.057257124 container create bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:40:05 compute-0 systemd[1]: Started libpod-conmon-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope.
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.808702007 +0000 UTC m=+0.038741475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.93691799 +0000 UTC m=+0.166957448 container init bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.944088229 +0000 UTC m=+0.174127667 container start bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.947608211 +0000 UTC m=+0.177647679 container attach bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:40:05 compute-0 stoic_feistel[270941]: 167 167
Oct 01 13:40:05 compute-0 systemd[1]: libpod-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope: Deactivated successfully.
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.951483894 +0000 UTC m=+0.181523332 container died bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d3408bf2eda516e3cf720f5278e52b03c2e2398e9bf6cc5845d2afd5f488fa8-merged.mount: Deactivated successfully.
Oct 01 13:40:05 compute-0 podman[270925]: 2025-10-01 13:40:05.991707236 +0000 UTC m=+0.221746674 container remove bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:40:06 compute-0 systemd[1]: libpod-conmon-bca3ed2d43726ee404cec2ed60f2be20ee12fb2defd5c75d1f2070bd7412c8bf.scope: Deactivated successfully.
Oct 01 13:40:06 compute-0 podman[270963]: 2025-10-01 13:40:06.193787401 +0000 UTC m=+0.059791565 container create 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:40:06 compute-0 systemd[1]: Started libpod-conmon-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope.
Oct 01 13:40:06 compute-0 podman[270963]: 2025-10-01 13:40:06.165259083 +0000 UTC m=+0.031263297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:40:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:40:06 compute-0 podman[270963]: 2025-10-01 13:40:06.302251976 +0000 UTC m=+0.168256170 container init 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 13:40:06 compute-0 podman[270963]: 2025-10-01 13:40:06.315958993 +0000 UTC m=+0.181963117 container start 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:40:06 compute-0 podman[270963]: 2025-10-01 13:40:06.319926089 +0000 UTC m=+0.185930303 container attach 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:40:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct 01 13:40:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct 01 13:40:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct 01 13:40:07 compute-0 nova_compute[260022]: 2025-10-01 13:40:07.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:07 compute-0 zen_rhodes[270979]: {
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_id": 0,
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "type": "bluestore"
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     },
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_id": 2,
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "type": "bluestore"
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     },
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_id": 1,
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:         "type": "bluestore"
Oct 01 13:40:07 compute-0 zen_rhodes[270979]:     }
Oct 01 13:40:07 compute-0 zen_rhodes[270979]: }
Oct 01 13:40:07 compute-0 systemd[1]: libpod-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Deactivated successfully.
Oct 01 13:40:07 compute-0 systemd[1]: libpod-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Consumed 1.102s CPU time.
Oct 01 13:40:07 compute-0 podman[270963]: 2025-10-01 13:40:07.411407322 +0000 UTC m=+1.277411496 container died 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:40:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01b02048720164818a202f238373d697f20aeb17c27c90d3d92816c67efdcf4-merged.mount: Deactivated successfully.
Oct 01 13:40:07 compute-0 podman[270963]: 2025-10-01 13:40:07.47478244 +0000 UTC m=+1.340786574 container remove 636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:40:07 compute-0 systemd[1]: libpod-conmon-636aee3649bee134e661dc063d268aa7bb45fc1794445c05b4a2a276a00fdbe1.scope: Deactivated successfully.
Oct 01 13:40:07 compute-0 sudo[270860]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:40:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:40:07 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:07 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5255da47-4a03-4c95-87e4-977bca3dc2c8 does not exist
Oct 01 13:40:07 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9f20ce1c-8ad6-484e-b30e-7edd75c67177 does not exist
Oct 01 13:40:07 compute-0 sudo[271026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:40:07 compute-0 sudo[271026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:07 compute-0 sudo[271026]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:07 compute-0 ceph-mon[74802]: pgmap v1071: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:07 compute-0 ceph-mon[74802]: osdmap e142: 3 total, 3 up, 3 in
Oct 01 13:40:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:40:07 compute-0 sudo[271051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:40:07 compute-0 sudo[271051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:40:07 compute-0 sudo[271051]: pam_unix(sudo:session): session closed for user root
Oct 01 13:40:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 13:40:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Oct 01 13:40:09 compute-0 nova_compute[260022]: 2025-10-01 13:40:09.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:09 compute-0 ceph-mon[74802]: pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Oct 01 13:40:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Oct 01 13:40:11 compute-0 sshd-session[271076]: Connection closed by 172.105.102.42 port 41344
Oct 01 13:40:11 compute-0 sshd-session[271077]: error: Protocol major versions differ: 2 vs. 1
Oct 01 13:40:11 compute-0 sshd-session[271079]: error: Protocol major versions differ: 2 vs. 1
Oct 01 13:40:11 compute-0 sshd-session[271079]: banner exchange: Connection from 172.105.102.42 port 41376: could not read protocol version
Oct 01 13:40:11 compute-0 sshd-session[271077]: banner exchange: Connection from 172.105.102.42 port 41348: could not read protocol version
Oct 01 13:40:11 compute-0 sshd-session[271081]: Connection closed by 172.105.102.42 port 41388
Oct 01 13:40:11 compute-0 sshd-session[271078]: Unable to negotiate with 172.105.102.42 port 41362: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271084]: Unable to negotiate with 172.105.102.42 port 41400: no matching host key type found. Their offer: ssh-dss [preauth]
Oct 01 13:40:11 compute-0 ceph-mon[74802]: pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Oct 01 13:40:11 compute-0 sshd-session[271080]: Invalid user hceqs from 172.105.102.42 port 41384
Oct 01 13:40:11 compute-0 sshd-session[271080]: Connection closed by invalid user hceqs 172.105.102.42 port 41384 [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271086]: Unable to negotiate with 172.105.102.42 port 41406: no matching host key type found. Their offer: ssh-rsa [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271088]: Connection closed by 172.105.102.42 port 41410 [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271090]: Unable to negotiate with 172.105.102.42 port 41420: no matching host key type found. Their offer: ecdsa-sha2-nistp384 [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271092]: Unable to negotiate with 172.105.102.42 port 41434: no matching host key type found. Their offer: ecdsa-sha2-nistp521 [preauth]
Oct 01 13:40:11 compute-0 sshd-session[271094]: Connection closed by 172.105.102.42 port 41442 [preauth]
Oct 01 13:40:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct 01 13:40:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:40:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:40:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:40:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:40:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct 01 13:40:12 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct 01 13:40:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:13 compute-0 ceph-mon[74802]: osdmap e143: 3 total, 3 up, 3 in
Oct 01 13:40:13 compute-0 ceph-mon[74802]: pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:15 compute-0 podman[271099]: 2025-10-01 13:40:15.537693614 +0000 UTC m=+0.077094667 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct 01 13:40:15 compute-0 podman[271098]: 2025-10-01 13:40:15.541812725 +0000 UTC m=+0.081345362 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:40:15 compute-0 podman[271097]: 2025-10-01 13:40:15.570834879 +0000 UTC m=+0.114483397 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:40:15 compute-0 podman[271096]: 2025-10-01 13:40:15.588686778 +0000 UTC m=+0.138125260 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:40:15 compute-0 ceph-mon[74802]: pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:40:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 01 13:40:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct 01 13:40:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct 01 13:40:16 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct 01 13:40:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:17 compute-0 ceph-mon[74802]: pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 01 13:40:17 compute-0 ceph-mon[74802]: osdmap e144: 3 total, 3 up, 3 in
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:40:19 compute-0 ceph-mon[74802]: pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:40:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct 01 13:40:21 compute-0 ceph-mon[74802]: pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct 01 13:40:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:23 compute-0 ceph-mon[74802]: pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:25 compute-0 ceph-mon[74802]: pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:28 compute-0 ceph-mon[74802]: pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:40:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:40:30 compute-0 ceph-mon[74802]: pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:40:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:32 compute-0 ceph-mon[74802]: pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:34 compute-0 ceph-mon[74802]: pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:35 compute-0 unix_chkpwd[271179]: password check failed for user (root)
Oct 01 13:40:35 compute-0 sshd-session[271177]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:40:36 compute-0 ceph-mon[74802]: pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:37 compute-0 sshd-session[271177]: Failed password for root from 27.254.137.144 port 37614 ssh2
Oct 01 13:40:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:38 compute-0 ceph-mon[74802]: pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:39 compute-0 sshd-session[271177]: Received disconnect from 27.254.137.144 port 37614:11: Bye Bye [preauth]
Oct 01 13:40:39 compute-0 sshd-session[271177]: Disconnected from authenticating user root 27.254.137.144 port 37614 [preauth]
Oct 01 13:40:40 compute-0 ceph-mon[74802]: pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:42 compute-0 ceph-mon[74802]: pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:44 compute-0 ceph-mon[74802]: pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:46 compute-0 ceph-mon[74802]: pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:46 compute-0 podman[271184]: 2025-10-01 13:40:46.524064015 +0000 UTC m=+0.069745272 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 13:40:46 compute-0 podman[271183]: 2025-10-01 13:40:46.549935798 +0000 UTC m=+0.101211243 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 01 13:40:46 compute-0 podman[271182]: 2025-10-01 13:40:46.560564548 +0000 UTC m=+0.111560015 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:40:46 compute-0 podman[271185]: 2025-10-01 13:40:46.562446907 +0000 UTC m=+0.103049012 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:40:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:47 compute-0 ceph-mon[74802]: pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:40:47
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms']
Oct 01 13:40:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:40:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:49 compute-0 ceph-mon[74802]: pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:49 compute-0 nova_compute[260022]: 2025-10-01 13:40:49.733 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:51 compute-0 ceph-mon[74802]: pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:53 compute-0 sshd-session[271180]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:40:53 compute-0 sshd-session[271180]: banner exchange: Connection from 14.103.127.7 port 41270: Connection timed out
Oct 01 13:40:53 compute-0 ceph-mon[74802]: pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:40:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:40:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:40:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:40:55 compute-0 ceph-mon[74802]: pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:40:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3975874512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:40:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:40:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:40:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:40:57 compute-0 ceph-mon[74802]: pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:58 compute-0 nova_compute[260022]: 2025-10-01 13:40:58.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.370 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:40:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:40:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514629967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:40:59 compute-0 ceph-mon[74802]: pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:40:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2514629967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.796 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.983 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.985 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.985 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:40:59 compute-0 nova_compute[260022]: 2025-10-01 13:40:59.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.042 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.043 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.137 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.213 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.214 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.228 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.248 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.263 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:41:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:41:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507632771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.700 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.705 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.719 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.721 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:41:00 compute-0 nova_compute[260022]: 2025-10-01 13:41:00.721 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:41:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3507632771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:41:01 compute-0 ceph-mon[74802]: pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:02 compute-0 nova_compute[260022]: 2025-10-01 13:41:02.718 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:03 compute-0 nova_compute[260022]: 2025-10-01 13:41:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:03 compute-0 nova_compute[260022]: 2025-10-01 13:41:03.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:41:03 compute-0 nova_compute[260022]: 2025-10-01 13:41:03.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:41:03 compute-0 nova_compute[260022]: 2025-10-01 13:41:03.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:41:03 compute-0 nova_compute[260022]: 2025-10-01 13:41:03.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:03 compute-0 ceph-mon[74802]: pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:04 compute-0 nova_compute[260022]: 2025-10-01 13:41:04.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:05 compute-0 nova_compute[260022]: 2025-10-01 13:41:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:05 compute-0 nova_compute[260022]: 2025-10-01 13:41:05.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:05 compute-0 nova_compute[260022]: 2025-10-01 13:41:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:41:05 compute-0 ceph-mon[74802]: pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:07 compute-0 ceph-mon[74802]: pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:07 compute-0 sudo[271305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:07 compute-0 sudo[271305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:07 compute-0 sudo[271305]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:07 compute-0 sudo[271330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:41:07 compute-0 sudo[271330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:07 compute-0 sudo[271330]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:08 compute-0 sudo[271355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:08 compute-0 sudo[271355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:08 compute-0 sudo[271355]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:08 compute-0 sudo[271380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:41:08 compute-0 sudo[271380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:08 compute-0 nova_compute[260022]: 2025-10-01 13:41:08.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:41:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:41:08 compute-0 sudo[271380]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:08 compute-0 sudo[271438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:08 compute-0 sudo[271438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:08 compute-0 sudo[271438]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:08 compute-0 sudo[271463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:41:08 compute-0 sudo[271463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:08 compute-0 sudo[271463]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:08 compute-0 sudo[271488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:08 compute-0 sudo[271488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:08 compute-0 sudo[271488]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:09 compute-0 sudo[271513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- inventory --format=json-pretty --filter-for-batch
Oct 01 13:41:09 compute-0 sudo[271513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.52054086 +0000 UTC m=+0.062699327 container create 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:41:09 compute-0 systemd[1]: Started libpod-conmon-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope.
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.488724288 +0000 UTC m=+0.030882815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.624539653 +0000 UTC m=+0.166698120 container init 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.636903417 +0000 UTC m=+0.179061884 container start 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.640669336 +0000 UTC m=+0.182827803 container attach 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:09 compute-0 inspiring_davinci[271595]: 167 167
Oct 01 13:41:09 compute-0 systemd[1]: libpod-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope: Deactivated successfully.
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.647344889 +0000 UTC m=+0.189503346 container died 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f222edc7aa2da8b2f5b6ad4fab448c06a627ad6eefaed21726a9d69ab36fa91-merged.mount: Deactivated successfully.
Oct 01 13:41:09 compute-0 podman[271579]: 2025-10-01 13:41:09.700216653 +0000 UTC m=+0.242375120 container remove 4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:41:09 compute-0 systemd[1]: libpod-conmon-4b572117b2e43c63c0beb8bccf45ae309d7029652f55b4a67fd66a371f1e049d.scope: Deactivated successfully.
Oct 01 13:41:09 compute-0 ceph-mon[74802]: pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:41:09 compute-0 podman[271621]: 2025-10-01 13:41:09.953692427 +0000 UTC m=+0.059693163 container create 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:41:10 compute-0 systemd[1]: Started libpod-conmon-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope.
Oct 01 13:41:10 compute-0 podman[271621]: 2025-10-01 13:41:09.931665554 +0000 UTC m=+0.037666300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:10 compute-0 podman[271621]: 2025-10-01 13:41:10.07720569 +0000 UTC m=+0.183206516 container init 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:41:10 compute-0 podman[271621]: 2025-10-01 13:41:10.091427022 +0000 UTC m=+0.197427758 container start 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 01 13:41:10 compute-0 podman[271621]: 2025-10-01 13:41:10.095796932 +0000 UTC m=+0.201797738 container attach 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:41:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:41:11 compute-0 sharp_shamir[271638]: [
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:     {
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "available": false,
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "ceph_device": false,
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "lsm_data": {},
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "lvs": [],
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "path": "/dev/sr0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "rejected_reasons": [
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "Has a FileSystem",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "Insufficient space (<5GB)"
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         ],
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         "sys_api": {
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "actuators": null,
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "device_nodes": "sr0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "devname": "sr0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "human_readable_size": "482.00 KB",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "id_bus": "ata",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "model": "QEMU DVD-ROM",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "nr_requests": "2",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "parent": "/dev/sr0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "partitions": {},
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "path": "/dev/sr0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "removable": "1",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "rev": "2.5+",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "ro": "0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "rotational": "0",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "sas_address": "",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "sas_device_handle": "",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "scheduler_mode": "mq-deadline",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "sectors": 0,
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "sectorsize": "2048",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "size": 493568.0,
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "support_discard": "2048",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "type": "disk",
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:             "vendor": "QEMU"
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:         }
Oct 01 13:41:11 compute-0 sharp_shamir[271638]:     }
Oct 01 13:41:11 compute-0 sharp_shamir[271638]: ]
Oct 01 13:41:11 compute-0 systemd[1]: libpod-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Deactivated successfully.
Oct 01 13:41:11 compute-0 systemd[1]: libpod-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Consumed 1.637s CPU time.
Oct 01 13:41:11 compute-0 podman[271621]: 2025-10-01 13:41:11.647604556 +0000 UTC m=+1.753605282 container died 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-becb0cc3d23acb191de693ba461a5c48e6bcfc4c4073d813e7823bfbc2ddba60-merged.mount: Deactivated successfully.
Oct 01 13:41:11 compute-0 podman[271621]: 2025-10-01 13:41:11.720400914 +0000 UTC m=+1.826401620 container remove 5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:41:11 compute-0 systemd[1]: libpod-conmon-5740da55427ca94f64e12ca9c32d28bf55380d26dfb7b0e421b34f6a566719b7.scope: Deactivated successfully.
Oct 01 13:41:11 compute-0 sudo[271513]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 41aec1a5-43e8-479d-a1f3-46e65fe4dffc does not exist
Oct 01 13:41:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 18819c11-1d72-47eb-910f-9866454ad414 does not exist
Oct 01 13:41:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 54bda237-d917-449a-8cb3-f9d3382fb447 does not exist
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:41:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:41:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:41:11 compute-0 sudo[273661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:11 compute-0 sudo[273661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:11 compute-0 sudo[273661]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:11 compute-0 sudo[273686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:41:11 compute-0 sudo[273686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:11 compute-0 sudo[273686]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:12 compute-0 sudo[273711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:12 compute-0 sudo[273711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:12 compute-0 sudo[273711]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:12 compute-0 sudo[273736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:41:12 compute-0 sudo[273736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.309 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:41:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:41:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:41:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:41:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.542355952 +0000 UTC m=+0.062723279 container create 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:41:12 compute-0 systemd[1]: Started libpod-conmon-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope.
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.510240959 +0000 UTC m=+0.030608386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.630378816 +0000 UTC m=+0.150746163 container init 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:41:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.639042791 +0000 UTC m=+0.159410148 container start 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.643120322 +0000 UTC m=+0.163487689 container attach 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:12 compute-0 pensive_brattain[273820]: 167 167
Oct 01 13:41:12 compute-0 systemd[1]: libpod-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope: Deactivated successfully.
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.647397167 +0000 UTC m=+0.167764534 container died 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct 01 13:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-806efc5dfc4bf1779384948f0d399d3e87c73196df95c30f57050b0593ba53c6-merged.mount: Deactivated successfully.
Oct 01 13:41:12 compute-0 podman[273803]: 2025-10-01 13:41:12.695895742 +0000 UTC m=+0.216263109 container remove 967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:41:12 compute-0 systemd[1]: libpod-conmon-967bd0eea032b68416e020dd7a4792c477e5eacd68d5ee06b6d85b194da9370b.scope: Deactivated successfully.
Oct 01 13:41:12 compute-0 podman[273844]: 2025-10-01 13:41:12.894676363 +0000 UTC m=+0.057874644 container create ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:41:12 compute-0 systemd[1]: Started libpod-conmon-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope.
Oct 01 13:41:12 compute-0 podman[273844]: 2025-10-01 13:41:12.867757896 +0000 UTC m=+0.030956247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:12 compute-0 podman[273844]: 2025-10-01 13:41:12.996313741 +0000 UTC m=+0.159512042 container init ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:41:13 compute-0 podman[273844]: 2025-10-01 13:41:13.003929173 +0000 UTC m=+0.167127484 container start ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:41:13 compute-0 podman[273844]: 2025-10-01 13:41:13.008671814 +0000 UTC m=+0.171870085 container attach ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:41:13 compute-0 ceph-mon[74802]: pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:14 compute-0 fervent_mirzakhani[273860]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:41:14 compute-0 fervent_mirzakhani[273860]: --> relative data size: 1.0
Oct 01 13:41:14 compute-0 fervent_mirzakhani[273860]: --> All data devices are unavailable
Oct 01 13:41:14 compute-0 systemd[1]: libpod-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Deactivated successfully.
Oct 01 13:41:14 compute-0 systemd[1]: libpod-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Consumed 1.110s CPU time.
Oct 01 13:41:14 compute-0 podman[273844]: 2025-10-01 13:41:14.154783416 +0000 UTC m=+1.317981717 container died ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c75ded890065853d5695e4f5f189b4e61f139a867df2f8b54a60fd2f2c3bc4-merged.mount: Deactivated successfully.
Oct 01 13:41:14 compute-0 podman[273844]: 2025-10-01 13:41:14.225298342 +0000 UTC m=+1.388496623 container remove ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:41:14 compute-0 systemd[1]: libpod-conmon-ce094527139e8c61a982052be66ef9623f35fb26c8376c7e99da1f844ad5e581.scope: Deactivated successfully.
Oct 01 13:41:14 compute-0 sudo[273736]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:14 compute-0 sudo[273903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:14 compute-0 sudo[273903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:14 compute-0 sudo[273903]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:14 compute-0 sudo[273928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:41:14 compute-0 sudo[273928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:14 compute-0 sudo[273928]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:14 compute-0 sudo[273953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:14 compute-0 sudo[273953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:14 compute-0 sudo[273953]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:14 compute-0 sudo[273978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:41:14 compute-0 sudo[273978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.055171802 +0000 UTC m=+0.057903374 container create 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:41:15 compute-0 systemd[1]: Started libpod-conmon-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope.
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.027433249 +0000 UTC m=+0.030164871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.150497739 +0000 UTC m=+0.153229341 container init 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.162191041 +0000 UTC m=+0.164922613 container start 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.166796818 +0000 UTC m=+0.169528380 container attach 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:41:15 compute-0 blissful_jepsen[274059]: 167 167
Oct 01 13:41:15 compute-0 systemd[1]: libpod-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope: Deactivated successfully.
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.170424103 +0000 UTC m=+0.173155695 container died 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:41:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4e35b69d0b62672e084c743300a0c5c0d31e17b93e1e20e4dcf60151b68a5f1-merged.mount: Deactivated successfully.
Oct 01 13:41:15 compute-0 podman[274043]: 2025-10-01 13:41:15.225161467 +0000 UTC m=+0.227893039 container remove 54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:41:15 compute-0 systemd[1]: libpod-conmon-54a26ed8f5b19b90eb98ba4443254ebb8807cad5b0d5af4f2dacc8dda9d1713b.scope: Deactivated successfully.
Oct 01 13:41:15 compute-0 podman[274084]: 2025-10-01 13:41:15.440539007 +0000 UTC m=+0.053321880 container create d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:41:15 compute-0 systemd[1]: Started libpod-conmon-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope.
Oct 01 13:41:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:15 compute-0 podman[274084]: 2025-10-01 13:41:15.418834085 +0000 UTC m=+0.031616968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:15 compute-0 podman[274084]: 2025-10-01 13:41:15.530824971 +0000 UTC m=+0.143607894 container init d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:15 compute-0 podman[274084]: 2025-10-01 13:41:15.547107881 +0000 UTC m=+0.159890764 container start d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:41:15 compute-0 podman[274084]: 2025-10-01 13:41:15.563670788 +0000 UTC m=+0.176453711 container attach d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:15 compute-0 ceph-mon[74802]: pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:16 compute-0 charming_wilbur[274101]: {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     "0": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "devices": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "/dev/loop3"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             ],
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_name": "ceph_lv0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_size": "21470642176",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "name": "ceph_lv0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "tags": {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_name": "ceph",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.crush_device_class": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.encrypted": "0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_id": "0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.vdo": "0"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             },
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "vg_name": "ceph_vg0"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         }
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     ],
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     "1": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "devices": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "/dev/loop4"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             ],
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_name": "ceph_lv1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_size": "21470642176",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "name": "ceph_lv1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "tags": {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_name": "ceph",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.crush_device_class": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.encrypted": "0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_id": "1",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.vdo": "0"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             },
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "vg_name": "ceph_vg1"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         }
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     ],
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     "2": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "devices": [
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "/dev/loop5"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             ],
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_name": "ceph_lv2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_size": "21470642176",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "name": "ceph_lv2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "tags": {
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.cluster_name": "ceph",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.crush_device_class": "",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.encrypted": "0",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osd_id": "2",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:                 "ceph.vdo": "0"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             },
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "type": "block",
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:             "vg_name": "ceph_vg2"
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:         }
Oct 01 13:41:16 compute-0 charming_wilbur[274101]:     ]
Oct 01 13:41:16 compute-0 charming_wilbur[274101]: }
Oct 01 13:41:16 compute-0 systemd[1]: libpod-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope: Deactivated successfully.
Oct 01 13:41:16 compute-0 podman[274084]: 2025-10-01 13:41:16.474493686 +0000 UTC m=+1.087276589 container died d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f7dd2ba62cf58364c5916c51953c9cee23f8a5556c0c1b605f83560307cd1a-merged.mount: Deactivated successfully.
Oct 01 13:41:16 compute-0 podman[274084]: 2025-10-01 13:41:16.529941233 +0000 UTC m=+1.142724076 container remove d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:41:16 compute-0 systemd[1]: libpod-conmon-d9f4470199c09aca864fc948dad0474c81b2d66af3be27a10769035d1893e421.scope: Deactivated successfully.
Oct 01 13:41:16 compute-0 sudo[273978]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:16 compute-0 podman[274127]: 2025-10-01 13:41:16.661286176 +0000 UTC m=+0.073926726 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct 01 13:41:16 compute-0 podman[274125]: 2025-10-01 13:41:16.671746919 +0000 UTC m=+0.095064919 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:16 compute-0 sudo[274146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:16 compute-0 sudo[274146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:16 compute-0 sudo[274146]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:16 compute-0 sudo[274201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:41:16 compute-0 sudo[274201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:16 compute-0 sudo[274201]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:16 compute-0 podman[274189]: 2025-10-01 13:41:16.772851929 +0000 UTC m=+0.071626773 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 01 13:41:16 compute-0 podman[274188]: 2025-10-01 13:41:16.830220146 +0000 UTC m=+0.137149929 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250923)
Oct 01 13:41:16 compute-0 sudo[274256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:16 compute-0 sudo[274256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:16 compute-0 sudo[274256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:16 compute-0 sudo[274284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:41:16 compute-0 sudo[274284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.309361806 +0000 UTC m=+0.057499492 container create 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:41:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:17 compute-0 systemd[1]: Started libpod-conmon-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope.
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.28686379 +0000 UTC m=+0.035001566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.445686379 +0000 UTC m=+0.193824145 container init 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.458492326 +0000 UTC m=+0.206630032 container start 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.463051381 +0000 UTC m=+0.211189147 container attach 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:41:17 compute-0 stupefied_jemison[274366]: 167 167
Oct 01 13:41:17 compute-0 systemd[1]: libpod-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope: Deactivated successfully.
Oct 01 13:41:17 compute-0 conmon[274366]: conmon 7dac978be191e49b9120 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope/container/memory.events
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.469210487 +0000 UTC m=+0.217348233 container died 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1d2a9b95535c15b5330ead9cc75967cfb42ccf3d84dd112d9fb1d57df1b71d-merged.mount: Deactivated successfully.
Oct 01 13:41:17 compute-0 podman[274349]: 2025-10-01 13:41:17.555976131 +0000 UTC m=+0.304113847 container remove 7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:41:17 compute-0 systemd[1]: libpod-conmon-7dac978be191e49b9120a391c02dd08cdc924d49b09ba6dbebaea2a631b29348.scope: Deactivated successfully.
Oct 01 13:41:17 compute-0 podman[274389]: 2025-10-01 13:41:17.814092222 +0000 UTC m=+0.063947848 container create 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:17 compute-0 systemd[1]: Started libpod-conmon-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope.
Oct 01 13:41:17 compute-0 podman[274389]: 2025-10-01 13:41:17.789447257 +0000 UTC m=+0.039302873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:41:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:41:17 compute-0 ceph-mon[74802]: pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:17 compute-0 podman[274389]: 2025-10-01 13:41:17.915999658 +0000 UTC m=+0.165855314 container init 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:41:17 compute-0 podman[274389]: 2025-10-01 13:41:17.930664564 +0000 UTC m=+0.180520190 container start 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:41:17 compute-0 podman[274389]: 2025-10-01 13:41:17.934553188 +0000 UTC m=+0.184408814 container attach 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:41:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:19 compute-0 kind_satoshi[274406]: {
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_id": 0,
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "type": "bluestore"
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     },
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_id": 2,
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "type": "bluestore"
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     },
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_id": 1,
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:         "type": "bluestore"
Oct 01 13:41:19 compute-0 kind_satoshi[274406]:     }
Oct 01 13:41:19 compute-0 kind_satoshi[274406]: }
Oct 01 13:41:19 compute-0 systemd[1]: libpod-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Deactivated successfully.
Oct 01 13:41:19 compute-0 systemd[1]: libpod-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Consumed 1.125s CPU time.
Oct 01 13:41:19 compute-0 podman[274439]: 2025-10-01 13:41:19.102712033 +0000 UTC m=+0.038244760 container died 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-601897ff82c3bb32557cc7dc4cba243033ffd332fdbdc9773a4b1a2c9d5e6111-merged.mount: Deactivated successfully.
Oct 01 13:41:19 compute-0 podman[274439]: 2025-10-01 13:41:19.185480759 +0000 UTC m=+0.121013436 container remove 4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:41:19 compute-0 systemd[1]: libpod-conmon-4f0e70520efacce56e1757e49efcef209ebc157d7b0264e3078cf32485134a9c.scope: Deactivated successfully.
Oct 01 13:41:19 compute-0 sudo[274284]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:41:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:41:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 84a3e75f-54fc-48d1-baa0-dfd2083e3188 does not exist
Oct 01 13:41:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7a4718d2-02ba-4cb0-a94b-c99ba93ef0e9 does not exist
Oct 01 13:41:19 compute-0 sudo[274454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:41:19 compute-0 sudo[274454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:19 compute-0 sudo[274454]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:19 compute-0 sudo[274479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:41:19 compute-0 sudo[274479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:41:19 compute-0 sudo[274479]: pam_unix(sudo:session): session closed for user root
Oct 01 13:41:19 compute-0 ceph-mon[74802]: pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:41:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:41:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:41:22 compute-0 ceph-mon[74802]: pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:41:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:41:24 compute-0 ceph-mon[74802]: pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:41:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:26 compute-0 ceph-mon[74802]: pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:28 compute-0 ceph-mon[74802]: pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:30 compute-0 ceph-mon[74802]: pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:32 compute-0 ceph-mon[74802]: pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:34 compute-0 ceph-mon[74802]: pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:35 compute-0 ceph-mon[74802]: pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:37 compute-0 ceph-mon[74802]: pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:39 compute-0 ceph-mon[74802]: pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:41 compute-0 ceph-mon[74802]: pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:43 compute-0 ceph-mon[74802]: pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:45 compute-0 ceph-mon[74802]: pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:47 compute-0 podman[274507]: 2025-10-01 13:41:47.533114296 +0000 UTC m=+0.078452820 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:41:47 compute-0 podman[274506]: 2025-10-01 13:41:47.538656292 +0000 UTC m=+0.086407823 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct 01 13:41:47 compute-0 podman[274508]: 2025-10-01 13:41:47.551847923 +0000 UTC m=+0.091384062 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923)
Oct 01 13:41:47 compute-0 podman[274505]: 2025-10-01 13:41:47.56559249 +0000 UTC m=+0.111208403 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible)
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:41:47
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'vms']
Oct 01 13:41:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:41:47 compute-0 ceph-mon[74802]: pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:41:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:49 compute-0 unix_chkpwd[274587]: password check failed for user (root)
Oct 01 13:41:49 compute-0 sshd-session[274585]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:41:49 compute-0 ceph-mon[74802]: pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:51 compute-0 sshd-session[274585]: Failed password for root from 27.254.137.144 port 33288 ssh2
Oct 01 13:41:52 compute-0 ceph-mon[74802]: pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:53 compute-0 sshd-session[274585]: Received disconnect from 27.254.137.144 port 33288:11: Bye Bye [preauth]
Oct 01 13:41:53 compute-0 sshd-session[274585]: Disconnected from authenticating user root 27.254.137.144 port 33288 [preauth]
Oct 01 13:41:54 compute-0 ceph-mon[74802]: pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:41:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:41:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:41:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:41:56 compute-0 ceph-mon[74802]: pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:41:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1595201962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:41:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:41:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:41:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:41:58 compute-0 ceph-mon[74802]: pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:41:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:00 compute-0 ceph-mon[74802]: pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:42:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:42:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2084272842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:42:00 compute-0 nova_compute[260022]: 2025-10-01 13:42:00.856 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.026 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:42:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2084272842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.096 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.096 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.140 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:42:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:42:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901455468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.579 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.585 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.599 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.600 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:42:01 compute-0 nova_compute[260022]: 2025-10-01 13:42:01.600 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:42:02 compute-0 ceph-mon[74802]: pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/901455468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:42:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct 01 13:42:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct 01 13:42:04 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct 01 13:42:04 compute-0 ceph-mon[74802]: pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:04 compute-0 nova_compute[260022]: 2025-10-01 13:42:04.596 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:04 compute-0 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:04 compute-0 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:42:04 compute-0 nova_compute[260022]: 2025-10-01 13:42:04.597 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:42:04 compute-0 nova_compute[260022]: 2025-10-01 13:42:04.612 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:42:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:05 compute-0 ceph-mon[74802]: osdmap e145: 3 total, 3 up, 3 in
Oct 01 13:42:05 compute-0 nova_compute[260022]: 2025-10-01 13:42:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:05 compute-0 nova_compute[260022]: 2025-10-01 13:42:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:05 compute-0 nova_compute[260022]: 2025-10-01 13:42:05.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:42:06 compute-0 ceph-mon[74802]: pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct 01 13:42:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct 01 13:42:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.321778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126321852, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2109, "num_deletes": 254, "total_data_size": 3510436, "memory_usage": 3582264, "flush_reason": "Manual Compaction"}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct 01 13:42:06 compute-0 nova_compute[260022]: 2025-10-01 13:42:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126349109, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3431667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21043, "largest_seqno": 23151, "table_properties": {"data_size": 3421973, "index_size": 6188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19312, "raw_average_key_size": 20, "raw_value_size": 3402709, "raw_average_value_size": 3563, "num_data_blocks": 279, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759325907, "oldest_key_time": 1759325907, "file_creation_time": 1759326126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 28193 microseconds, and 14147 cpu microseconds.
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.349959) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3431667 bytes OK
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.350275) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352509) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352557) EVENT_LOG_v1 {"time_micros": 1759326126352547, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.352601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3501587, prev total WAL file size 3501587, number of live WAL files 2.
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.355694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3351KB)], [50(7611KB)]
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126355817, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11225646, "oldest_snapshot_seqno": -1}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4785 keys, 9455610 bytes, temperature: kUnknown
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126430907, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9455610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9420384, "index_size": 22188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 117220, "raw_average_key_size": 24, "raw_value_size": 9330638, "raw_average_value_size": 1949, "num_data_blocks": 932, "num_entries": 4785, "num_filter_entries": 4785, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.431258) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9455610 bytes
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.432761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5306, records dropped: 521 output_compression: NoCompression
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.432791) EVENT_LOG_v1 {"time_micros": 1759326126432775, "job": 26, "event": "compaction_finished", "compaction_time_micros": 75185, "compaction_time_cpu_micros": 40494, "output_level": 6, "num_output_files": 1, "total_output_size": 9455610, "num_input_records": 5306, "num_output_records": 4785, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126434238, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326126437200, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.355556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:42:06.437299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:42:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:07 compute-0 ceph-mon[74802]: osdmap e146: 3 total, 3 up, 3 in
Oct 01 13:42:07 compute-0 ceph-mon[74802]: pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:07 compute-0 nova_compute[260022]: 2025-10-01 13:42:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:42:09 compute-0 ceph-mon[74802]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:42:10 compute-0 nova_compute[260022]: 2025-10-01 13:42:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:42:11 compute-0 ceph-mon[74802]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 13:42:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.310 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:42:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:42:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:12.311 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:42:12 compute-0 nova_compute[260022]: 2025-10-01 13:42:12.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:42:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.8 MiB/s wr, 44 op/s
Oct 01 13:42:14 compute-0 ceph-mon[74802]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.8 MiB/s wr, 44 op/s
Oct 01 13:42:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 01 13:42:16 compute-0 ceph-mon[74802]: pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 01 13:42:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 36 op/s
Oct 01 13:42:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:18 compute-0 ceph-mon[74802]: pgmap v1141: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 36 op/s
Oct 01 13:42:18 compute-0 podman[274634]: 2025-10-01 13:42:18.520794692 +0000 UTC m=+0.067745769 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:42:18 compute-0 podman[274640]: 2025-10-01 13:42:18.526175523 +0000 UTC m=+0.066712435 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:42:18 compute-0 podman[274633]: 2025-10-01 13:42:18.54241475 +0000 UTC m=+0.087158677 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:42:18 compute-0 podman[274632]: 2025-10-01 13:42:18.543900107 +0000 UTC m=+0.100353277 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 13:42:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Oct 01 13:42:19 compute-0 sudo[274712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:19 compute-0 sudo[274712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:19 compute-0 sudo[274712]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:19 compute-0 sudo[274737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:42:19 compute-0 sudo[274737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:19 compute-0 sudo[274737]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:19 compute-0 sudo[274762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:19 compute-0 sudo[274762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:19 compute-0 sudo[274762]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:19 compute-0 sudo[274787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 13:42:19 compute-0 sudo[274787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:20 compute-0 ceph-mon[74802]: pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Oct 01 13:42:20 compute-0 sudo[274787]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:42:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:42:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:20 compute-0 sudo[274831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:20 compute-0 sudo[274831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:20 compute-0 sudo[274831]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:20 compute-0 sudo[274856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:42:20 compute-0 sudo[274856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:20 compute-0 sudo[274856]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:20 compute-0 sudo[274881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:20 compute-0 sudo[274881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:20 compute-0 sudo[274881]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:20 compute-0 sudo[274906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:42:20 compute-0 sudo[274906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:21 compute-0 sudo[274906]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:21 compute-0 ceph-mon[74802]: pgmap v1143: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1a92c012-86e5-4015-8235-c6f6b6199707 does not exist
Oct 01 13:42:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b564483-70fb-45e0-a3bc-c84c5115da4f does not exist
Oct 01 13:42:21 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 16d182fc-feb6-4909-8b5b-971e2f5a0473 does not exist
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:42:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:42:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:42:21 compute-0 sudo[274962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:21 compute-0 sudo[274962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:21 compute-0 sudo[274962]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:21 compute-0 sudo[274987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:42:21 compute-0 sudo[274987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:21 compute-0 sudo[274987]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:21 compute-0 sudo[275012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:21 compute-0 sudo[275012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:21 compute-0 sudo[275012]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:21 compute-0 sudo[275037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:42:21 compute-0 sudo[275037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:21 compute-0 podman[275103]: 2025-10-01 13:42:21.956273928 +0000 UTC m=+0.083233162 container create 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:21.906450951 +0000 UTC m=+0.033410225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:22 compute-0 systemd[1]: Started libpod-conmon-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope.
Oct 01 13:42:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:22.063862005 +0000 UTC m=+0.190821279 container init 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:22.072284543 +0000 UTC m=+0.199243787 container start 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:22.075924899 +0000 UTC m=+0.202884163 container attach 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:42:22 compute-0 wonderful_proskuriakova[275119]: 167 167
Oct 01 13:42:22 compute-0 systemd[1]: libpod-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope: Deactivated successfully.
Oct 01 13:42:22 compute-0 conmon[275119]: conmon 7f3923e0f525ce828cd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope/container/memory.events
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:22.081092044 +0000 UTC m=+0.208051318 container died 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e738eebbee6880ebf78fd4f8de856201171cc03857962d9bdcf63d489a3e14b-merged.mount: Deactivated successfully.
Oct 01 13:42:22 compute-0 podman[275103]: 2025-10-01 13:42:22.142804729 +0000 UTC m=+0.269764003 container remove 7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:42:22 compute-0 systemd[1]: libpod-conmon-7f3923e0f525ce828cd4bd89921f29d2cd10a155fbe1cc0208bbf67af28c5cb1.scope: Deactivated successfully.
Oct 01 13:42:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:42:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:42:22 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:42:22 compute-0 podman[275145]: 2025-10-01 13:42:22.344854115 +0000 UTC m=+0.067858013 container create 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:42:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:22 compute-0 podman[275145]: 2025-10-01 13:42:22.308553688 +0000 UTC m=+0.031557646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:22 compute-0 systemd[1]: Started libpod-conmon-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope.
Oct 01 13:42:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:22 compute-0 podman[275145]: 2025-10-01 13:42:22.455498118 +0000 UTC m=+0.178502036 container init 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:42:22 compute-0 podman[275145]: 2025-10-01 13:42:22.467937895 +0000 UTC m=+0.190941763 container start 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:42:22 compute-0 podman[275145]: 2025-10-01 13:42:22.509995374 +0000 UTC m=+0.232999252 container attach 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:23 compute-0 ceph-mon[74802]: pgmap v1144: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:23 compute-0 crazy_leakey[275162]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:42:23 compute-0 crazy_leakey[275162]: --> relative data size: 1.0
Oct 01 13:42:23 compute-0 crazy_leakey[275162]: --> All data devices are unavailable
Oct 01 13:42:23 compute-0 systemd[1]: libpod-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Deactivated successfully.
Oct 01 13:42:23 compute-0 systemd[1]: libpod-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Consumed 1.083s CPU time.
Oct 01 13:42:23 compute-0 podman[275145]: 2025-10-01 13:42:23.604287426 +0000 UTC m=+1.327291324 container died 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c732013a2bc6a477d85432fac8a31b0bf4166506be08b83e97d38af7959300-merged.mount: Deactivated successfully.
Oct 01 13:42:23 compute-0 podman[275145]: 2025-10-01 13:42:23.824963434 +0000 UTC m=+1.547967322 container remove 607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:42:23 compute-0 systemd[1]: libpod-conmon-607e130876515a2b9a0270707a589d21f651de63dec5a0fb8dc0ee16c7fd2498.scope: Deactivated successfully.
Oct 01 13:42:23 compute-0 sudo[275037]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:23 compute-0 sudo[275206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:23 compute-0 sudo[275206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:23 compute-0 sudo[275206]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:24 compute-0 sudo[275231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:42:24 compute-0 sudo[275231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:24 compute-0 sudo[275231]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:24 compute-0 sudo[275256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:24 compute-0 sudo[275256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:24 compute-0 sudo[275256]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:24 compute-0 sudo[275281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:42:24 compute-0 sudo[275281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.620009976 +0000 UTC m=+0.053844696 container create a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:42:24 compute-0 systemd[1]: Started libpod-conmon-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope.
Oct 01 13:42:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.598060487 +0000 UTC m=+0.031895197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.712289735 +0000 UTC m=+0.146124495 container init a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.724401001 +0000 UTC m=+0.158235721 container start a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.729457652 +0000 UTC m=+0.163292382 container attach a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:42:24 compute-0 youthful_ardinghelli[275364]: 167 167
Oct 01 13:42:24 compute-0 systemd[1]: libpod-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope: Deactivated successfully.
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.731005401 +0000 UTC m=+0.164840121 container died a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f80649092122b77b656f0f7e7b5bb0537c15af8227f7ff8eaf88fe2f99db95-merged.mount: Deactivated successfully.
Oct 01 13:42:24 compute-0 podman[275348]: 2025-10-01 13:42:24.776425128 +0000 UTC m=+0.210259808 container remove a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:42:24 compute-0 systemd[1]: libpod-conmon-a685c7b5ab19b30d9db56d42ca59f3ec7bbb491c3d096a429532a9ca7cbe6272.scope: Deactivated successfully.
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.015240554 +0000 UTC m=+0.072786079 container create d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:42:25 compute-0 systemd[1]: Started libpod-conmon-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope.
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:24.985883378 +0000 UTC m=+0.043428953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.13162501 +0000 UTC m=+0.189170545 container init d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.14324018 +0000 UTC m=+0.200785735 container start d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.148403675 +0000 UTC m=+0.205949210 container attach d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:42:25 compute-0 ceph-mon[74802]: pgmap v1145: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:25 compute-0 sharp_cray[275403]: {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     "0": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "devices": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "/dev/loop3"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             ],
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_name": "ceph_lv0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_size": "21470642176",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "name": "ceph_lv0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "tags": {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_name": "ceph",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.crush_device_class": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.encrypted": "0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_id": "0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.vdo": "0"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             },
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "vg_name": "ceph_vg0"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         }
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     ],
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     "1": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "devices": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "/dev/loop4"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             ],
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_name": "ceph_lv1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_size": "21470642176",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "name": "ceph_lv1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "tags": {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_name": "ceph",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.crush_device_class": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.encrypted": "0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_id": "1",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.vdo": "0"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             },
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "vg_name": "ceph_vg1"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         }
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     ],
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     "2": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "devices": [
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "/dev/loop5"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             ],
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_name": "ceph_lv2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_size": "21470642176",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "name": "ceph_lv2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "tags": {
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.cluster_name": "ceph",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.crush_device_class": "",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.encrypted": "0",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osd_id": "2",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:                 "ceph.vdo": "0"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             },
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "type": "block",
Oct 01 13:42:25 compute-0 sharp_cray[275403]:             "vg_name": "ceph_vg2"
Oct 01 13:42:25 compute-0 sharp_cray[275403]:         }
Oct 01 13:42:25 compute-0 sharp_cray[275403]:     ]
Oct 01 13:42:25 compute-0 sharp_cray[275403]: }
Oct 01 13:42:25 compute-0 systemd[1]: libpod-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope: Deactivated successfully.
Oct 01 13:42:25 compute-0 conmon[275403]: conmon d4dc2cbdcbaeec3557a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope/container/memory.events
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.88924219 +0000 UTC m=+0.946787715 container died d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe1c914f804d32e5edbdf5aa022aea0b9dd2e96b1f8fbaefb8259112fbe2bf86-merged.mount: Deactivated successfully.
Oct 01 13:42:25 compute-0 podman[275387]: 2025-10-01 13:42:25.969274289 +0000 UTC m=+1.026819804 container remove d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:42:25 compute-0 systemd[1]: libpod-conmon-d4dc2cbdcbaeec3557a931e3e26ec77b1a596086214f641c067660725caa13b5.scope: Deactivated successfully.
Oct 01 13:42:26 compute-0 sudo[275281]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:26 compute-0 sudo[275424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:26 compute-0 sudo[275424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:26 compute-0 sudo[275424]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:26 compute-0 sudo[275449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:42:26 compute-0 sudo[275449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:26 compute-0 sudo[275449]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:26 compute-0 sudo[275474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:26 compute-0 sudo[275474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:26 compute-0 sudo[275474]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:26 compute-0 sudo[275499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:42:26 compute-0 sudo[275499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.785319289 +0000 UTC m=+0.049883010 container create 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:42:26 compute-0 systemd[1]: Started libpod-conmon-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope.
Oct 01 13:42:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.764858788 +0000 UTC m=+0.029422529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.876755881 +0000 UTC m=+0.141319592 container init 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.885710537 +0000 UTC m=+0.150274228 container start 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.888902198 +0000 UTC m=+0.153465909 container attach 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:42:26 compute-0 reverent_volhard[275582]: 167 167
Oct 01 13:42:26 compute-0 systemd[1]: libpod-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope: Deactivated successfully.
Oct 01 13:42:26 compute-0 podman[275565]: 2025-10-01 13:42:26.891930564 +0000 UTC m=+0.156494255 container died 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:42:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec971cfc5d2df23cd8ab9f1e11a585f7f9af8a015cad5ee936e392497c9ae13-merged.mount: Deactivated successfully.
Oct 01 13:42:27 compute-0 podman[275565]: 2025-10-01 13:42:27.080999806 +0000 UTC m=+0.345563497 container remove 1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_volhard, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 13:42:27 compute-0 systemd[1]: libpod-conmon-1bc11059993cee33d8b33b7b86f237c150c89bde012ef30ff3f8000cb55cc507.scope: Deactivated successfully.
Oct 01 13:42:27 compute-0 podman[275606]: 2025-10-01 13:42:27.365690963 +0000 UTC m=+0.078094618 container create b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:27 compute-0 podman[275606]: 2025-10-01 13:42:27.316264789 +0000 UTC m=+0.028668544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:42:27 compute-0 systemd[1]: Started libpod-conmon-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope.
Oct 01 13:42:27 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:42:27 compute-0 podman[275606]: 2025-10-01 13:42:27.636434677 +0000 UTC m=+0.348838402 container init b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:42:27 compute-0 podman[275606]: 2025-10-01 13:42:27.648255482 +0000 UTC m=+0.360659177 container start b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:42:27 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:27.672 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:42:27 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:27.675 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:42:27 compute-0 podman[275606]: 2025-10-01 13:42:27.716772495 +0000 UTC m=+0.429176190 container attach b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:27 compute-0 ceph-mon[74802]: pgmap v1146: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:28 compute-0 festive_neumann[275622]: {
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_id": 0,
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "type": "bluestore"
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     },
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_id": 2,
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "type": "bluestore"
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     },
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_id": 1,
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:42:28 compute-0 festive_neumann[275622]:         "type": "bluestore"
Oct 01 13:42:28 compute-0 festive_neumann[275622]:     }
Oct 01 13:42:28 compute-0 festive_neumann[275622]: }
Oct 01 13:42:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:28 compute-0 systemd[1]: libpod-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Deactivated successfully.
Oct 01 13:42:28 compute-0 podman[275606]: 2025-10-01 13:42:28.68334144 +0000 UTC m=+1.395745125 container died b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct 01 13:42:28 compute-0 systemd[1]: libpod-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Consumed 1.044s CPU time.
Oct 01 13:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdfbfd51524878d9d6ec0e8512cd1d6ea0980634459c905fb8aa6d53c4c6a4c2-merged.mount: Deactivated successfully.
Oct 01 13:42:28 compute-0 podman[275606]: 2025-10-01 13:42:28.758172073 +0000 UTC m=+1.470575738 container remove b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_neumann, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:42:28 compute-0 systemd[1]: libpod-conmon-b261b07ffd71cab9291ae2267976e0183ab86fe70e98d4d4edee3dc7c9829d53.scope: Deactivated successfully.
Oct 01 13:42:28 compute-0 sudo[275499]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:42:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:42:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c4c39d92-9e3b-438e-8e18-baeaf5d2bcc4 does not exist
Oct 01 13:42:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e1dfacae-a324-4a9a-b982-7b0d8dff391d does not exist
Oct 01 13:42:28 compute-0 sudo[275668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:42:28 compute-0 sudo[275668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:28 compute-0 sudo[275668]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:28 compute-0 sudo[275693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:42:28 compute-0 sudo[275693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:42:29 compute-0 sudo[275693]: pam_unix(sudo:session): session closed for user root
Oct 01 13:42:29 compute-0 ceph-mon[74802]: pgmap v1147: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:29 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:42:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:31 compute-0 ceph-mon[74802]: pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:33 compute-0 ceph-mon[74802]: pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:42:34.677 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:42:36 compute-0 ceph-mon[74802]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:38 compute-0 ceph-mon[74802]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:40 compute-0 ceph-mon[74802]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:42 compute-0 ceph-mon[74802]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:44 compute-0 ceph-mon[74802]: pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:46 compute-0 ceph-mon[74802]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:47 compute-0 ceph-mon[74802]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:42:47
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images']
Oct 01 13:42:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:42:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:49 compute-0 podman[275721]: 2025-10-01 13:42:49.552296127 +0000 UTC m=+0.085057036 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:42:49 compute-0 podman[275720]: 2025-10-01 13:42:49.562999345 +0000 UTC m=+0.095285379 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923)
Oct 01 13:42:49 compute-0 podman[275719]: 2025-10-01 13:42:49.573520588 +0000 UTC m=+0.112904006 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:42:49 compute-0 podman[275718]: 2025-10-01 13:42:49.585874337 +0000 UTC m=+0.124691647 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Oct 01 13:42:49 compute-0 ceph-mon[74802]: pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:51 compute-0 ceph-mon[74802]: pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:53 compute-0 ceph-mon[74802]: pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:42:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:42:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:42:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:42:55 compute-0 ceph-mon[74802]: pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:42:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/722530960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:42:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:42:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:42:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:42:57 compute-0 ceph-mon[74802]: pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:42:59 compute-0 ceph-mon[74802]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.412 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.413 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.414 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:43:01 compute-0 ceph-mon[74802]: pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:43:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285082951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:43:01 compute-0 nova_compute[260022]: 2025-10-01 13:43:01.838 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.012 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.013 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.014 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.014 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.097 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:43:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:43:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282990422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.537 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.543 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.559 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.561 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:43:02 compute-0 nova_compute[260022]: 2025-10-01 13:43:02.562 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:43:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2285082951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:43:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1282990422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:43:03 compute-0 nova_compute[260022]: 2025-10-01 13:43:03.564 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:03 compute-0 nova_compute[260022]: 2025-10-01 13:43:03.564 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:03 compute-0 ceph-mon[74802]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:05 compute-0 nova_compute[260022]: 2025-10-01 13:43:05.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:43:05 compute-0 ceph-mon[74802]: pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:06 compute-0 nova_compute[260022]: 2025-10-01 13:43:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:07 compute-0 ceph-mon[74802]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:08 compute-0 nova_compute[260022]: 2025-10-01 13:43:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:08 compute-0 nova_compute[260022]: 2025-10-01 13:43:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:09 compute-0 ceph-mon[74802]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:10 compute-0 nova_compute[260022]: 2025-10-01 13:43:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:43:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:11 compute-0 ceph-mon[74802]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.312 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:43:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.312 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:43:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:43:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:13 compute-0 ceph-mon[74802]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:15 compute-0 ceph-mon[74802]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:17 compute-0 ceph-mon[74802]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:19 compute-0 sshd-session[275845]: Connection closed by 27.254.137.144 port 57118 [preauth]
Oct 01 13:43:19 compute-0 ceph-mon[74802]: pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:20 compute-0 podman[275848]: 2025-10-01 13:43:20.564863461 +0000 UTC m=+0.106472282 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:43:20 compute-0 podman[275850]: 2025-10-01 13:43:20.571573122 +0000 UTC m=+0.095790635 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:20 compute-0 podman[275847]: 2025-10-01 13:43:20.59938318 +0000 UTC m=+0.141393504 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20250923)
Oct 01 13:43:20 compute-0 podman[275849]: 2025-10-01 13:43:20.603678086 +0000 UTC m=+0.135692415 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:21 compute-0 ceph-mon[74802]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:23 compute-0 ceph-mon[74802]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:25 compute-0 ceph-mon[74802]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:27 compute-0 ceph-mon[74802]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:28 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:28.583 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:43:28 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:28.584 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:43:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:29 compute-0 sudo[275924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:29 compute-0 sudo[275924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:29 compute-0 sudo[275924]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:29 compute-0 sudo[275949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:43:29 compute-0 sudo[275949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:29 compute-0 sudo[275949]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:29 compute-0 sudo[275974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:29 compute-0 sudo[275974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:29 compute-0 sudo[275974]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:29 compute-0 sudo[275999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:43:29 compute-0 sudo[275999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:29 compute-0 ceph-mon[74802]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:30 compute-0 podman[276095]: 2025-10-01 13:43:30.12943519 +0000 UTC m=+0.095624130 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:43:30 compute-0 podman[276095]: 2025-10-01 13:43:30.281262914 +0000 UTC m=+0.247451834 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:43:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:31 compute-0 sudo[275999]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:43:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:43:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:31 compute-0 sudo[276255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:31 compute-0 sudo[276255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:31 compute-0 sudo[276255]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:31 compute-0 sudo[276280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:43:31 compute-0 sudo[276280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:31 compute-0 sudo[276280]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:31 compute-0 sudo[276305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:31 compute-0 sudo[276305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:31 compute-0 sudo[276305]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:31 compute-0 sudo[276330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:43:31 compute-0 sudo[276330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:31 compute-0 sudo[276330]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6b59a1bb-d282-4090-bc7f-a0f9767a3e0b does not exist
Oct 01 13:43:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a1554867-aed6-4e21-9996-452691b1edf8 does not exist
Oct 01 13:43:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b181535c-6ad4-4503-b9a7-42f1e7a56aa8 does not exist
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:43:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:43:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:43:32 compute-0 sudo[276385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:32 compute-0 sudo[276385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:32 compute-0 sudo[276385]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:32 compute-0 sudo[276410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:43:32 compute-0 sudo[276410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:32 compute-0 sudo[276410]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:32 compute-0 sudo[276435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:32 compute-0 sudo[276435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:32 compute-0 sudo[276435]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:32 compute-0 sudo[276460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:43:32 compute-0 sudo[276460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.756216263 +0000 UTC m=+0.068198164 container create 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct 01 13:43:32 compute-0 systemd[1]: Started libpod-conmon-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope.
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.726935448 +0000 UTC m=+0.038917389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.848407214 +0000 UTC m=+0.160389075 container init 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.857304665 +0000 UTC m=+0.169286526 container start 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.861758625 +0000 UTC m=+0.173740516 container attach 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:43:32 compute-0 agitated_bartik[276539]: 167 167
Oct 01 13:43:32 compute-0 systemd[1]: libpod-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope: Deactivated successfully.
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.865017998 +0000 UTC m=+0.176999859 container died 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d57c03452c8cee17eeef88b9cd7124fe2accea570983335401ce0c1cb064ea45-merged.mount: Deactivated successfully.
Oct 01 13:43:32 compute-0 podman[276523]: 2025-10-01 13:43:32.925016963 +0000 UTC m=+0.236998844 container remove 1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:32 compute-0 systemd[1]: libpod-conmon-1a818f27d12941267a5a7272271645871e7aea3320261fa713a6748219e47c7e.scope: Deactivated successfully.
Oct 01 13:43:33 compute-0 podman[276564]: 2025-10-01 13:43:33.203545386 +0000 UTC m=+0.097010014 container create 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 13:43:33 compute-0 podman[276564]: 2025-10-01 13:43:33.149154499 +0000 UTC m=+0.042619117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:33 compute-0 systemd[1]: Started libpod-conmon-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope.
Oct 01 13:43:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:33 compute-0 podman[276564]: 2025-10-01 13:43:33.544900363 +0000 UTC m=+0.438365031 container init 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:43:33 compute-0 podman[276564]: 2025-10-01 13:43:33.553513955 +0000 UTC m=+0.446978583 container start 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:43:33 compute-0 podman[276564]: 2025-10-01 13:43:33.65915865 +0000 UTC m=+0.552623278 container attach 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:43:34 compute-0 ceph-mon[74802]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:43:34.588 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:43:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:34 compute-0 romantic_fermi[276580]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:43:34 compute-0 romantic_fermi[276580]: --> relative data size: 1.0
Oct 01 13:43:34 compute-0 romantic_fermi[276580]: --> All data devices are unavailable
Oct 01 13:43:34 compute-0 systemd[1]: libpod-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Deactivated successfully.
Oct 01 13:43:34 compute-0 systemd[1]: libpod-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Consumed 1.198s CPU time.
Oct 01 13:43:34 compute-0 podman[276564]: 2025-10-01 13:43:34.805370788 +0000 UTC m=+1.698835426 container died 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-20799b0cd404f031ba15defd08b5254d8d16d39143d658fdaee281908efde430-merged.mount: Deactivated successfully.
Oct 01 13:43:35 compute-0 podman[276564]: 2025-10-01 13:43:35.043093134 +0000 UTC m=+1.936557772 container remove 9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:43:35 compute-0 systemd[1]: libpod-conmon-9d5edbe7e889647f290f3ac36a78aa0329e6dcbb1715c3ef0b0b0dd404384d10.scope: Deactivated successfully.
Oct 01 13:43:35 compute-0 sudo[276460]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:35 compute-0 sudo[276623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:35 compute-0 sudo[276623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:35 compute-0 sudo[276623]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:35 compute-0 sudo[276648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:43:35 compute-0 sudo[276648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:35 compute-0 sudo[276648]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:35 compute-0 sudo[276673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:35 compute-0 sudo[276673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:35 compute-0 sudo[276673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:35 compute-0 sudo[276698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:43:35 compute-0 sudo[276698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:35 compute-0 podman[276763]: 2025-10-01 13:43:35.960359323 +0000 UTC m=+0.113789183 container create b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:43:35 compute-0 podman[276763]: 2025-10-01 13:43:35.882061321 +0000 UTC m=+0.035491231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:36 compute-0 systemd[1]: Started libpod-conmon-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope.
Oct 01 13:43:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:36 compute-0 podman[276763]: 2025-10-01 13:43:36.126317763 +0000 UTC m=+0.279747643 container init b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 13:43:36 compute-0 podman[276763]: 2025-10-01 13:43:36.140966036 +0000 UTC m=+0.294395886 container start b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:43:36 compute-0 busy_keldysh[276779]: 167 167
Oct 01 13:43:36 compute-0 systemd[1]: libpod-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope: Deactivated successfully.
Oct 01 13:43:36 compute-0 conmon[276779]: conmon b0d8ff1c0dec03bdc184 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope/container/memory.events
Oct 01 13:43:36 compute-0 podman[276763]: 2025-10-01 13:43:36.179388438 +0000 UTC m=+0.332818268 container attach b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:43:36 compute-0 podman[276763]: 2025-10-01 13:43:36.181133804 +0000 UTC m=+0.334563634 container died b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:43:36 compute-0 ceph-mon[74802]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b35d88e6ec84a54695a0a6565f806f7222c1a14b79706473e25f512b89d44d67-merged.mount: Deactivated successfully.
Oct 01 13:43:36 compute-0 podman[276763]: 2025-10-01 13:43:36.263717611 +0000 UTC m=+0.417147431 container remove b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:43:36 compute-0 systemd[1]: libpod-conmon-b0d8ff1c0dec03bdc1849a2a80c6a7332242f126f0877e6e635ca614f87a71d7.scope: Deactivated successfully.
Oct 01 13:43:36 compute-0 podman[276807]: 2025-10-01 13:43:36.467320009 +0000 UTC m=+0.047804570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:36 compute-0 podman[276807]: 2025-10-01 13:43:36.662688398 +0000 UTC m=+0.243172879 container create 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:43:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:36 compute-0 systemd[1]: Started libpod-conmon-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope.
Oct 01 13:43:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:37 compute-0 podman[276807]: 2025-10-01 13:43:37.117487566 +0000 UTC m=+0.697972087 container init 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:43:37 compute-0 podman[276807]: 2025-10-01 13:43:37.132656595 +0000 UTC m=+0.713141116 container start 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:43:37 compute-0 podman[276807]: 2025-10-01 13:43:37.139636115 +0000 UTC m=+0.720120616 container attach 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:43:37 compute-0 ceph-mon[74802]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]: {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     "0": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "devices": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "/dev/loop3"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             ],
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_name": "ceph_lv0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_size": "21470642176",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "name": "ceph_lv0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "tags": {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_name": "ceph",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.crush_device_class": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.encrypted": "0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_id": "0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.vdo": "0"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             },
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "vg_name": "ceph_vg0"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         }
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     ],
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     "1": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "devices": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "/dev/loop4"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             ],
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_name": "ceph_lv1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_size": "21470642176",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "name": "ceph_lv1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "tags": {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_name": "ceph",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.crush_device_class": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.encrypted": "0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_id": "1",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.vdo": "0"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             },
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "vg_name": "ceph_vg1"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         }
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     ],
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     "2": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "devices": [
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "/dev/loop5"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             ],
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_name": "ceph_lv2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_size": "21470642176",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "name": "ceph_lv2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "tags": {
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.cluster_name": "ceph",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.crush_device_class": "",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.encrypted": "0",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osd_id": "2",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:                 "ceph.vdo": "0"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             },
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "type": "block",
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:             "vg_name": "ceph_vg2"
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:         }
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]:     ]
Oct 01 13:43:37 compute-0 gallant_sutherland[276824]: }
Oct 01 13:43:37 compute-0 systemd[1]: libpod-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope: Deactivated successfully.
Oct 01 13:43:37 compute-0 podman[276807]: 2025-10-01 13:43:37.939822068 +0000 UTC m=+1.520306549 container died 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba59212fc28962c803a063eb51dc3362bd2919f3313eaba638d2a8bd9ac11e57-merged.mount: Deactivated successfully.
Oct 01 13:43:37 compute-0 podman[276807]: 2025-10-01 13:43:37.99941867 +0000 UTC m=+1.579903191 container remove 5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sutherland, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 13:43:38 compute-0 systemd[1]: libpod-conmon-5b5cb2986c05ce194836c2975ac43d43c0c4491e163a0dd4a888392a9925eb4d.scope: Deactivated successfully.
Oct 01 13:43:38 compute-0 sudo[276698]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:38 compute-0 sudo[276847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:38 compute-0 sudo[276847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:38 compute-0 sudo[276847]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:38 compute-0 sudo[276872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:43:38 compute-0 sudo[276872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:38 compute-0 sudo[276872]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:38 compute-0 sudo[276897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:38 compute-0 sudo[276897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:38 compute-0 sudo[276897]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:38 compute-0 sudo[276922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:43:38 compute-0 sudo[276922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.702642902 +0000 UTC m=+0.056417392 container create 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:43:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:38 compute-0 systemd[1]: Started libpod-conmon-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope.
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.678049306 +0000 UTC m=+0.031823876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.806343416 +0000 UTC m=+0.160117996 container init 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.818397447 +0000 UTC m=+0.172171967 container start 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.822901999 +0000 UTC m=+0.176676569 container attach 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:43:38 compute-0 amazing_robinson[277007]: 167 167
Oct 01 13:43:38 compute-0 systemd[1]: libpod-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope: Deactivated successfully.
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.826932466 +0000 UTC m=+0.180706986 container died 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2de934d21f9f43e5ee59ae1ddb7bddf003de21350685fb01dab15b83e9afa81-merged.mount: Deactivated successfully.
Oct 01 13:43:38 compute-0 podman[276990]: 2025-10-01 13:43:38.877768961 +0000 UTC m=+0.231543481 container remove 796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:43:38 compute-0 systemd[1]: libpod-conmon-796fc84d08df89523462784d368fb5ea6936c227f90e5d3716fe8d53c05f1d5d.scope: Deactivated successfully.
Oct 01 13:43:39 compute-0 podman[277030]: 2025-10-01 13:43:39.142906253 +0000 UTC m=+0.059750838 container create 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:43:39 compute-0 systemd[1]: Started libpod-conmon-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope.
Oct 01 13:43:39 compute-0 podman[277030]: 2025-10-01 13:43:39.124096928 +0000 UTC m=+0.040941533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:43:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:43:39 compute-0 podman[277030]: 2025-10-01 13:43:39.251410818 +0000 UTC m=+0.168255413 container init 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:43:39 compute-0 podman[277030]: 2025-10-01 13:43:39.266880666 +0000 UTC m=+0.183725271 container start 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:43:39 compute-0 podman[277030]: 2025-10-01 13:43:39.271227834 +0000 UTC m=+0.188072459 container attach 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:43:39 compute-0 ceph-mon[74802]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]: {
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_id": 0,
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "type": "bluestore"
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     },
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_id": 2,
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "type": "bluestore"
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     },
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_id": 1,
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:         "type": "bluestore"
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]:     }
Oct 01 13:43:40 compute-0 brave_mirzakhani[277046]: }
Oct 01 13:43:40 compute-0 systemd[1]: libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Deactivated successfully.
Oct 01 13:43:40 compute-0 systemd[1]: libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Consumed 1.060s CPU time.
Oct 01 13:43:40 compute-0 conmon[277046]: conmon 91a993d7110f94e15fd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope/container/memory.events
Oct 01 13:43:40 compute-0 podman[277030]: 2025-10-01 13:43:40.320821511 +0000 UTC m=+1.237666096 container died 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b8a801374d3f859799663d5bcd9126b5c6d3839fa4a297d1cafa41e930c44ff-merged.mount: Deactivated successfully.
Oct 01 13:43:40 compute-0 podman[277030]: 2025-10-01 13:43:40.371494751 +0000 UTC m=+1.288339336 container remove 91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:43:40 compute-0 systemd[1]: libpod-conmon-91a993d7110f94e15fd918661610d51140d51897b812f01baae83883992b829e.scope: Deactivated successfully.
Oct 01 13:43:40 compute-0 sudo[276922]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:43:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:43:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ced96db3-49b1-4947-8a68-510fc26cc28e does not exist
Oct 01 13:43:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8130182d-368d-4663-be9e-0882f71d6f89 does not exist
Oct 01 13:43:40 compute-0 sudo[277093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:43:40 compute-0 sudo[277093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:40 compute-0 sudo[277093]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:40 compute-0 sudo[277118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:43:40 compute-0 sudo[277118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:43:40 compute-0 sudo[277118]: pam_unix(sudo:session): session closed for user root
Oct 01 13:43:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:43:41 compute-0 ceph-mon[74802]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:43 compute-0 ceph-mon[74802]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:45 compute-0 ceph-mon[74802]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:47 compute-0 ceph-mon[74802]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:43:47
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Oct 01 13:43:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:43:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:49 compute-0 ceph-mon[74802]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:51 compute-0 podman[277144]: 2025-10-01 13:43:51.524657986 +0000 UTC m=+0.071344824 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:43:51 compute-0 podman[277145]: 2025-10-01 13:43:51.529641503 +0000 UTC m=+0.071948452 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:43:51 compute-0 podman[277149]: 2025-10-01 13:43:51.542456009 +0000 UTC m=+0.072368727 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 13:43:51 compute-0 podman[277143]: 2025-10-01 13:43:51.562885743 +0000 UTC m=+0.112179673 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:43:51 compute-0 ceph-mon[74802]: pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:53 compute-0 ceph-mon[74802]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:43:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:43:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:43:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:43:55 compute-0 ceph-mon[74802]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:43:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3603793647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:43:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:43:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:43:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:43:57 compute-0 ceph-mon[74802]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:43:59 compute-0 ceph-mon[74802]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.384 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.385 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:44:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:44:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607660270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:44:01 compute-0 nova_compute[260022]: 2025-10-01 13:44:01.887 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:44:01 compute-0 ceph-mon[74802]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3607660270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.067 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.069 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.157 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.157 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.182 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:44:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.405604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242405809, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1440, "num_deletes": 505, "total_data_size": 1847092, "memory_usage": 1885464, "flush_reason": "Manual Compaction"}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242422527, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1573272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23152, "largest_seqno": 24591, "table_properties": {"data_size": 1567313, "index_size": 2715, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16088, "raw_average_key_size": 18, "raw_value_size": 1553226, "raw_average_value_size": 1833, "num_data_blocks": 123, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326127, "oldest_key_time": 1759326127, "file_creation_time": 1759326242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17153 microseconds, and 8923 cpu microseconds.
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.422792) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1573272 bytes OK
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.422941) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424793) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424824) EVENT_LOG_v1 {"time_micros": 1759326242424813, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.424856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1839627, prev total WAL file size 1839627, number of live WAL files 2.
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.426811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353038' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1536KB)], [53(9233KB)]
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242426879, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11028882, "oldest_snapshot_seqno": -1}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4626 keys, 7867304 bytes, temperature: kUnknown
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242487355, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7867304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7835430, "index_size": 19220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115497, "raw_average_key_size": 24, "raw_value_size": 7750769, "raw_average_value_size": 1675, "num_data_blocks": 800, "num_entries": 4626, "num_filter_entries": 4626, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.487713) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7867304 bytes
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.489171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 129.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(12.0) write-amplify(5.0) OK, records in: 5632, records dropped: 1006 output_compression: NoCompression
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.489193) EVENT_LOG_v1 {"time_micros": 1759326242489182, "job": 28, "event": "compaction_finished", "compaction_time_micros": 60601, "compaction_time_cpu_micros": 38785, "output_level": 6, "num_output_files": 1, "total_output_size": 7867304, "num_input_records": 5632, "num_output_records": 4626, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242489629, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326242491760, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.426651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:44:02.491861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:44:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:44:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180216587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.610 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.620 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.642 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.644 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:44:02 compute-0 nova_compute[260022]: 2025-10-01 13:44:02.645 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:44:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1180216587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:44:03 compute-0 ceph-mon[74802]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:03 compute-0 nova_compute[260022]: 2025-10-01 13:44:03.645 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:04 compute-0 nova_compute[260022]: 2025-10-01 13:44:04.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:05 compute-0 nova_compute[260022]: 2025-10-01 13:44:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:05 compute-0 nova_compute[260022]: 2025-10-01 13:44:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:44:05 compute-0 ceph-mon[74802]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:07 compute-0 nova_compute[260022]: 2025-10-01 13:44:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:07 compute-0 nova_compute[260022]: 2025-10-01 13:44:07.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:44:07 compute-0 nova_compute[260022]: 2025-10-01 13:44:07.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:44:07 compute-0 nova_compute[260022]: 2025-10-01 13:44:07.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:44:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:07 compute-0 ceph-mon[74802]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:08 compute-0 nova_compute[260022]: 2025-10-01 13:44:08.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:09 compute-0 ceph-mon[74802]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:10 compute-0 nova_compute[260022]: 2025-10-01 13:44:10.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:10 compute-0 nova_compute[260022]: 2025-10-01 13:44:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:11 compute-0 ceph-mon[74802]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:44:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:44:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:44:12 compute-0 nova_compute[260022]: 2025-10-01 13:44:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:13 compute-0 ceph-mon[74802]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:15 compute-0 nova_compute[260022]: 2025-10-01 13:44:15.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:44:15 compute-0 ceph-mon[74802]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:17 compute-0 ceph-mon[74802]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:19 compute-0 ceph-mon[74802]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:21 compute-0 ceph-mon[74802]: pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:22 compute-0 podman[277266]: 2025-10-01 13:44:22.53794701 +0000 UTC m=+0.078629084 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:44:22 compute-0 podman[277268]: 2025-10-01 13:44:22.549933349 +0000 UTC m=+0.074569226 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:44:22 compute-0 podman[277267]: 2025-10-01 13:44:22.568646709 +0000 UTC m=+0.093587946 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 01 13:44:22 compute-0 podman[277265]: 2025-10-01 13:44:22.582747394 +0000 UTC m=+0.121216428 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2)
Oct 01 13:44:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:23 compute-0 ceph-mon[74802]: pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:25 compute-0 ceph-mon[74802]: pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:27 compute-0 ceph-mon[74802]: pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:29.533 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:44:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:29.535 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:44:29 compute-0 ceph-mon[74802]: pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:31 compute-0 ceph-mon[74802]: pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:33 compute-0 ceph-mon[74802]: pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:35 compute-0 ceph-mon[74802]: pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct 01 13:44:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct 01 13:44:37 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct 01 13:44:37 compute-0 ceph-mon[74802]: pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Oct 01 13:44:38 compute-0 ceph-mon[74802]: osdmap e147: 3 total, 3 up, 3 in
Oct 01 13:44:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:44:39.536 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:44:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct 01 13:44:39 compute-0 ceph-mon[74802]: pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Oct 01 13:44:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct 01 13:44:39 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct 01 13:44:40 compute-0 sudo[277349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:40 compute-0 sudo[277349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:40 compute-0 sudo[277349]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:40 compute-0 sudo[277374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:44:40 compute-0 sudo[277374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:40 compute-0 sudo[277374]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 01 13:44:40 compute-0 sudo[277399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:40 compute-0 sudo[277399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:40 compute-0 sudo[277399]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:40 compute-0 sudo[277424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:44:40 compute-0 sudo[277424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:40 compute-0 ceph-mon[74802]: osdmap e148: 3 total, 3 up, 3 in
Oct 01 13:44:41 compute-0 sudo[277424]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 55f765c8-4254-4bc1-91d9-6991e283102d does not exist
Oct 01 13:44:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 23e60e8a-5395-4cdf-9b94-cc2137eed963 does not exist
Oct 01 13:44:41 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 83120ba7-da1d-48da-9a45-7feb5291e50f does not exist
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:44:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:44:41 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:44:41 compute-0 sudo[277479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:41 compute-0 sudo[277479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:41 compute-0 sudo[277479]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:41 compute-0 sudo[277504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:44:41 compute-0 sudo[277504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:41 compute-0 sudo[277504]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:41 compute-0 sudo[277529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:41 compute-0 sudo[277529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:41 compute-0 sudo[277529]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:41 compute-0 sudo[277554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:44:41 compute-0 sudo[277554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:42 compute-0 ceph-mon[74802]: pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 1023 B/s wr, 8 op/s
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:44:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.276350726 +0000 UTC m=+0.045783307 container create 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:44:42 compute-0 systemd[1]: Started libpod-conmon-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope.
Oct 01 13:44:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.257100898 +0000 UTC m=+0.026533469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.354784642 +0000 UTC m=+0.124217253 container init 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.367754342 +0000 UTC m=+0.137186913 container start 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.370948883 +0000 UTC m=+0.140381464 container attach 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:44:42 compute-0 objective_faraday[277637]: 167 167
Oct 01 13:44:42 compute-0 systemd[1]: libpod-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope: Deactivated successfully.
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.374217056 +0000 UTC m=+0.143649627 container died 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc30307af531efa076ca830181a18aba6633ed126071a919a7cff087a979e42-merged.mount: Deactivated successfully.
Oct 01 13:44:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:42 compute-0 podman[277621]: 2025-10-01 13:44:42.421068055 +0000 UTC m=+0.190500656 container remove 7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_faraday, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:44:42 compute-0 systemd[1]: libpod-conmon-7006c48c585cd3e6857a5c2cb07c9219efcc3e0e6ea24c2b7404b7eb1e673558.scope: Deactivated successfully.
Oct 01 13:44:42 compute-0 podman[277661]: 2025-10-01 13:44:42.61221309 +0000 UTC m=+0.069321660 container create ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct 01 13:44:42 compute-0 systemd[1]: Started libpod-conmon-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope.
Oct 01 13:44:42 compute-0 podman[277661]: 2025-10-01 13:44:42.584124693 +0000 UTC m=+0.041233303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:42 compute-0 podman[277661]: 2025-10-01 13:44:42.713232438 +0000 UTC m=+0.170340998 container init ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:44:42 compute-0 podman[277661]: 2025-10-01 13:44:42.725969071 +0000 UTC m=+0.183077641 container start ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:44:42 compute-0 podman[277661]: 2025-10-01 13:44:42.73036821 +0000 UTC m=+0.187476840 container attach ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:44:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct 01 13:44:43 compute-0 confident_lamport[277678]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:44:43 compute-0 confident_lamport[277678]: --> relative data size: 1.0
Oct 01 13:44:43 compute-0 confident_lamport[277678]: --> All data devices are unavailable
Oct 01 13:44:43 compute-0 systemd[1]: libpod-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Deactivated successfully.
Oct 01 13:44:43 compute-0 systemd[1]: libpod-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Consumed 1.136s CPU time.
Oct 01 13:44:43 compute-0 podman[277661]: 2025-10-01 13:44:43.906458801 +0000 UTC m=+1.363567361 container died ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 13:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0c8bb8bc6412e67305d186f85b21ea62cc290208a435f1224e5b3e8902d3259-merged.mount: Deactivated successfully.
Oct 01 13:44:43 compute-0 podman[277661]: 2025-10-01 13:44:43.988067168 +0000 UTC m=+1.445175728 container remove ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:44:43 compute-0 systemd[1]: libpod-conmon-ae07b7e4dcdf57669d3cb56c8b7bda05b906c9f0a5c789ad8b12c14a3f03805f.scope: Deactivated successfully.
Oct 01 13:44:44 compute-0 sudo[277554]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:44 compute-0 ceph-mon[74802]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct 01 13:44:44 compute-0 sudo[277719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:44 compute-0 sudo[277719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:44 compute-0 sudo[277719]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:44 compute-0 sudo[277744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:44:44 compute-0 sudo[277744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:44 compute-0 sudo[277744]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:44 compute-0 sudo[277769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:44 compute-0 sudo[277769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:44 compute-0 sudo[277769]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:44 compute-0 sudo[277794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:44:44 compute-0 sudo[277794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.848530525 +0000 UTC m=+0.051850179 container create 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:44:44 compute-0 systemd[1]: Started libpod-conmon-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope.
Oct 01 13:44:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.82526688 +0000 UTC m=+0.028586634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.937320037 +0000 UTC m=+0.140639731 container init 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.948919834 +0000 UTC m=+0.152239508 container start 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.952822346 +0000 UTC m=+0.156142070 container attach 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:44:44 compute-0 adoring_carson[277875]: 167 167
Oct 01 13:44:44 compute-0 systemd[1]: libpod-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope: Deactivated successfully.
Oct 01 13:44:44 compute-0 podman[277859]: 2025-10-01 13:44:44.95703223 +0000 UTC m=+0.160351914 container died 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6833f2d423a0ea77bc61744c224bde2a67191152b708bd7ac737de4b6574d4a-merged.mount: Deactivated successfully.
Oct 01 13:44:45 compute-0 podman[277859]: 2025-10-01 13:44:45.005462409 +0000 UTC m=+0.208782093 container remove 6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_carson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct 01 13:44:45 compute-0 systemd[1]: libpod-conmon-6be27dfaa3ec6731b960a42a07e48b6c912ade05e38dd6f4f4cd4b5549e3fc22.scope: Deactivated successfully.
Oct 01 13:44:45 compute-0 podman[277899]: 2025-10-01 13:44:45.300290307 +0000 UTC m=+0.070702483 container create f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:44:45 compute-0 systemd[1]: Started libpod-conmon-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope.
Oct 01 13:44:45 compute-0 podman[277899]: 2025-10-01 13:44:45.272147778 +0000 UTC m=+0.042560004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:45 compute-0 podman[277899]: 2025-10-01 13:44:45.423003752 +0000 UTC m=+0.193415978 container init f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:44:45 compute-0 podman[277899]: 2025-10-01 13:44:45.43971867 +0000 UTC m=+0.210130846 container start f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:44:45 compute-0 podman[277899]: 2025-10-01 13:44:45.443605722 +0000 UTC m=+0.214017908 container attach f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 13:44:46 compute-0 ceph-mon[74802]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Oct 01 13:44:46 compute-0 nice_carver[277915]: {
Oct 01 13:44:46 compute-0 nice_carver[277915]:     "0": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:         {
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "devices": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "/dev/loop3"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             ],
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_name": "ceph_lv0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_size": "21470642176",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "name": "ceph_lv0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "tags": {
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_name": "ceph",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.crush_device_class": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.encrypted": "0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_id": "0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.vdo": "0"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             },
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "vg_name": "ceph_vg0"
Oct 01 13:44:46 compute-0 nice_carver[277915]:         }
Oct 01 13:44:46 compute-0 nice_carver[277915]:     ],
Oct 01 13:44:46 compute-0 nice_carver[277915]:     "1": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:         {
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "devices": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "/dev/loop4"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             ],
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_name": "ceph_lv1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_size": "21470642176",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "name": "ceph_lv1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "tags": {
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_name": "ceph",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.crush_device_class": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.encrypted": "0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_id": "1",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.vdo": "0"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             },
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "vg_name": "ceph_vg1"
Oct 01 13:44:46 compute-0 nice_carver[277915]:         }
Oct 01 13:44:46 compute-0 nice_carver[277915]:     ],
Oct 01 13:44:46 compute-0 nice_carver[277915]:     "2": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:         {
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "devices": [
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "/dev/loop5"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             ],
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_name": "ceph_lv2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_size": "21470642176",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "name": "ceph_lv2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "tags": {
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.cluster_name": "ceph",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.crush_device_class": "",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.encrypted": "0",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osd_id": "2",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:                 "ceph.vdo": "0"
Oct 01 13:44:46 compute-0 nice_carver[277915]:             },
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "type": "block",
Oct 01 13:44:46 compute-0 nice_carver[277915]:             "vg_name": "ceph_vg2"
Oct 01 13:44:46 compute-0 nice_carver[277915]:         }
Oct 01 13:44:46 compute-0 nice_carver[277915]:     ]
Oct 01 13:44:46 compute-0 nice_carver[277915]: }
Oct 01 13:44:46 compute-0 systemd[1]: libpod-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope: Deactivated successfully.
Oct 01 13:44:46 compute-0 podman[277899]: 2025-10-01 13:44:46.233465679 +0000 UTC m=+1.003877835 container died f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b8225886cfcf4685addc5b4b468b11e0e8ebe7687b836223aa59d664594e4f1-merged.mount: Deactivated successfully.
Oct 01 13:44:46 compute-0 podman[277899]: 2025-10-01 13:44:46.310352227 +0000 UTC m=+1.080764403 container remove f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carver, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:44:46 compute-0 systemd[1]: libpod-conmon-f4b36588ec29c8df5c3d9f388ffcd5992c7223edb1f935de77875264f56328cb.scope: Deactivated successfully.
Oct 01 13:44:46 compute-0 sudo[277794]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:46 compute-0 sudo[277935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:46 compute-0 sudo[277935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:46 compute-0 sudo[277935]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:46 compute-0 sudo[277960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:44:46 compute-0 sudo[277960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:46 compute-0 sudo[277960]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:46 compute-0 sudo[277985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:46 compute-0 sudo[277985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:46 compute-0 sudo[277985]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 37 op/s
Oct 01 13:44:46 compute-0 sudo[278010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:44:46 compute-0 sudo[278010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.247518025 +0000 UTC m=+0.072008034 container create 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:44:47 compute-0 systemd[1]: Started libpod-conmon-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope.
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.214914375 +0000 UTC m=+0.039404454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.332612271 +0000 UTC m=+0.157102270 container init 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.33891967 +0000 UTC m=+0.163409669 container start 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.343157395 +0000 UTC m=+0.167647434 container attach 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:44:47 compute-0 compassionate_varahamihira[278092]: 167 167
Oct 01 13:44:47 compute-0 systemd[1]: libpod-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope: Deactivated successfully.
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.346149199 +0000 UTC m=+0.170639198 container died 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct 01 13:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a29d1f5cc33bbbcdb99aee0fc63d2a62bc4762cd829ad8cd526af079a55f304-merged.mount: Deactivated successfully.
Oct 01 13:44:47 compute-0 podman[278075]: 2025-10-01 13:44:47.38579323 +0000 UTC m=+0.210283229 container remove 0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:44:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct 01 13:44:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct 01 13:44:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct 01 13:44:47 compute-0 systemd[1]: libpod-conmon-0eb13c04c852198b118c36c03ec67405039161229dfaa0fa4e72298d135344aa.scope: Deactivated successfully.
Oct 01 13:44:47 compute-0 podman[278116]: 2025-10-01 13:44:47.579839907 +0000 UTC m=+0.050547507 container create f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:44:47 compute-0 systemd[1]: Started libpod-conmon-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope.
Oct 01 13:44:47 compute-0 podman[278116]: 2025-10-01 13:44:47.55839188 +0000 UTC m=+0.029099470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:44:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:44:47 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 13:44:47 compute-0 podman[278116]: 2025-10-01 13:44:47.688561909 +0000 UTC m=+0.159269519 container init f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:44:47 compute-0 podman[278116]: 2025-10-01 13:44:47.707284131 +0000 UTC m=+0.177991731 container start f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:44:47 compute-0 podman[278116]: 2025-10-01 13:44:47.711896326 +0000 UTC m=+0.182603936 container attach f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:44:47
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control']
Oct 01 13:44:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:44:48 compute-0 ceph-mon[74802]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 37 op/s
Oct 01 13:44:48 compute-0 ceph-mon[74802]: osdmap e149: 3 total, 3 up, 3 in
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:44:48 compute-0 determined_pasteur[278133]: {
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_id": 0,
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "type": "bluestore"
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     },
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_id": 2,
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "type": "bluestore"
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     },
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_id": 1,
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:         "type": "bluestore"
Oct 01 13:44:48 compute-0 determined_pasteur[278133]:     }
Oct 01 13:44:48 compute-0 determined_pasteur[278133]: }
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct 01 13:44:48 compute-0 systemd[1]: libpod-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Deactivated successfully.
Oct 01 13:44:48 compute-0 systemd[1]: libpod-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Consumed 1.060s CPU time.
Oct 01 13:44:48 compute-0 podman[278116]: 2025-10-01 13:44:48.753908494 +0000 UTC m=+1.224616084 container died f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-68350acb2449d54c01bd6bc20b38a57f4289cd28611f12e571dd1632c06f0c3b-merged.mount: Deactivated successfully.
Oct 01 13:44:48 compute-0 podman[278116]: 2025-10-01 13:44:48.820327271 +0000 UTC m=+1.291034851 container remove f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pasteur, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:44:48 compute-0 systemd[1]: libpod-conmon-f93c1b37d7187d6add62a3fe2117f0534532c26033be617f91d81b1c199cd3a4.scope: Deactivated successfully.
Oct 01 13:44:48 compute-0 sudo[278010]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:44:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:44:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bd5b40e9-4de8-48f9-a0a9-b7a54c23bd78 does not exist
Oct 01 13:44:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ad4c826-f52a-4a4e-90f5-8793d1f2a8f3 does not exist
Oct 01 13:44:48 compute-0 sudo[278180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:44:48 compute-0 sudo[278180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:48 compute-0 sudo[278180]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:49 compute-0 sudo[278205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:44:49 compute-0 sudo[278205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:44:49 compute-0 sudo[278205]: pam_unix(sudo:session): session closed for user root
Oct 01 13:44:49 compute-0 ceph-mon[74802]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct 01 13:44:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:49 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:44:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Oct 01 13:44:51 compute-0 ceph-mon[74802]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Oct 01 13:44:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:53 compute-0 podman[278231]: 2025-10-01 13:44:53.556884123 +0000 UTC m=+0.095367592 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 01 13:44:53 compute-0 podman[278233]: 2025-10-01 13:44:53.582770451 +0000 UTC m=+0.108382803 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:44:53 compute-0 podman[278232]: 2025-10-01 13:44:53.593180369 +0000 UTC m=+0.124420639 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 01 13:44:53 compute-0 podman[278230]: 2025-10-01 13:44:53.647795094 +0000 UTC m=+0.186120418 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 01 13:44:53 compute-0 ceph-mon[74802]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:44:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:44:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:44:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:44:55 compute-0 ceph-mon[74802]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:44:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1564694016' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:44:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:44:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:44:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:44:57 compute-0 ceph-mon[74802]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:44:59 compute-0 ceph-mon[74802]: pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.468 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.469 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.469 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:45:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:45:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066229925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:45:01 compute-0 nova_compute[260022]: 2025-10-01 13:45:01.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:45:01 compute-0 ceph-mon[74802]: pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3066229925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.134 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.135 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5109MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.135 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.136 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:45:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.857 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 6bc1aa4b-48ff-473e-afdb-d40e73f8c36c has allocations against this compute host but is not found in the database.
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.858 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.858 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:45:02 compute-0 nova_compute[260022]: 2025-10-01 13:45:02.892 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:45:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:45:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155356671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:45:03 compute-0 nova_compute[260022]: 2025-10-01 13:45:03.292 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:45:03 compute-0 nova_compute[260022]: 2025-10-01 13:45:03.298 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:45:03 compute-0 nova_compute[260022]: 2025-10-01 13:45:03.328 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:45:03 compute-0 nova_compute[260022]: 2025-10-01 13:45:03.330 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:45:03 compute-0 nova_compute[260022]: 2025-10-01 13:45:03.330 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:45:03 compute-0 ceph-mon[74802]: pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1155356671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:45:04 compute-0 nova_compute[260022]: 2025-10-01 13:45:04.330 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:04 compute-0 nova_compute[260022]: 2025-10-01 13:45:04.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:05 compute-0 nova_compute[260022]: 2025-10-01 13:45:05.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:05 compute-0 nova_compute[260022]: 2025-10-01 13:45:05.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:45:05 compute-0 ceph-mon[74802]: pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:07 compute-0 nova_compute[260022]: 2025-10-01 13:45:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:07 compute-0 nova_compute[260022]: 2025-10-01 13:45:07.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 13:45:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:07 compute-0 unix_chkpwd[278356]: password check failed for user (root)
Oct 01 13:45:07 compute-0 sshd-session[278354]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144  user=root
Oct 01 13:45:07 compute-0 ceph-mon[74802]: pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:08 compute-0 nova_compute[260022]: 2025-10-01 13:45:08.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:08 compute-0 nova_compute[260022]: 2025-10-01 13:45:08.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:08 compute-0 nova_compute[260022]: 2025-10-01 13:45:08.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 13:45:08 compute-0 nova_compute[260022]: 2025-10-01 13:45:08.373 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 13:45:08 compute-0 nova_compute[260022]: 2025-10-01 13:45:08.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:09 compute-0 nova_compute[260022]: 2025-10-01 13:45:09.436 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:09 compute-0 nova_compute[260022]: 2025-10-01 13:45:09.436 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:45:09 compute-0 nova_compute[260022]: 2025-10-01 13:45:09.437 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:45:09 compute-0 nova_compute[260022]: 2025-10-01 13:45:09.460 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:45:09 compute-0 ceph-mon[74802]: pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:10 compute-0 sshd-session[278354]: Failed password for root from 27.254.137.144 port 52790 ssh2
Oct 01 13:45:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:11 compute-0 nova_compute[260022]: 2025-10-01 13:45:11.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:11 compute-0 nova_compute[260022]: 2025-10-01 13:45:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:11 compute-0 sshd-session[278354]: Received disconnect from 27.254.137.144 port 52790:11: Bye Bye [preauth]
Oct 01 13:45:11 compute-0 sshd-session[278354]: Disconnected from authenticating user root 27.254.137.144 port 52790 [preauth]
Oct 01 13:45:11 compute-0 ceph-mon[74802]: pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.313 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:45:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:45:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:45:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct 01 13:45:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct 01 13:45:13 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct 01 13:45:13 compute-0 ceph-mon[74802]: pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:14 compute-0 nova_compute[260022]: 2025-10-01 13:45:14.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:45:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:15 compute-0 ceph-mon[74802]: osdmap e150: 3 total, 3 up, 3 in
Oct 01 13:45:16 compute-0 ceph-mon[74802]: pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:18 compute-0 ceph-mon[74802]: pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:20 compute-0 ceph-mon[74802]: pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:22 compute-0 ceph-mon[74802]: pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:24 compute-0 ceph-mon[74802]: pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:45:24 compute-0 podman[278360]: 2025-10-01 13:45:24.554017895 +0000 UTC m=+0.084425057 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct 01 13:45:24 compute-0 podman[278358]: 2025-10-01 13:45:24.561682176 +0000 UTC m=+0.099381688 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:45:24 compute-0 podman[278359]: 2025-10-01 13:45:24.579687965 +0000 UTC m=+0.111569533 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:24 compute-0 podman[278357]: 2025-10-01 13:45:24.600784371 +0000 UTC m=+0.145898237 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 13:45:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 01 13:45:26 compute-0 ceph-mon[74802]: pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 01 13:45:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:45:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:28 compute-0 ceph-mon[74802]: pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:45:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:45:30 compute-0 ceph-mon[74802]: pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 13:45:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:30.149 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:45:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:30.151 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:45:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct 01 13:45:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct 01 13:45:31 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct 01 13:45:32 compute-0 ceph-mon[74802]: pgmap v1242: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:32 compute-0 ceph-mon[74802]: osdmap e151: 3 total, 3 up, 3 in
Oct 01 13:45:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:34 compute-0 ceph-mon[74802]: pgmap v1244: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:45:35.154 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:45:36 compute-0 ceph-mon[74802]: pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct 01 13:45:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct 01 13:45:37 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct 01 13:45:38 compute-0 ceph-mon[74802]: pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:45:38 compute-0 ceph-mon[74802]: osdmap e152: 3 total, 3 up, 3 in
Oct 01 13:45:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:45:40 compute-0 ceph-mon[74802]: pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 13:45:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 13:45:42 compute-0 ceph-mon[74802]: pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 13:45:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:44 compute-0 ceph-mon[74802]: pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:46 compute-0 ceph-mon[74802]: pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:45:47
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images']
Oct 01 13:45:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:45:48 compute-0 ceph-mon[74802]: pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:49 compute-0 sudo[278439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:49 compute-0 sudo[278439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:49 compute-0 sudo[278439]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:49 compute-0 sudo[278464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:45:49 compute-0 sudo[278464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:49 compute-0 sudo[278464]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:49 compute-0 sudo[278489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:49 compute-0 sudo[278489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:49 compute-0 sudo[278489]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:49 compute-0 sudo[278514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:45:49 compute-0 sudo[278514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:49 compute-0 sudo[278514]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 75af579d-4b9f-4336-b90c-bb41eeb5f7d0 does not exist
Oct 01 13:45:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2d3e94e4-b707-40ba-a2f4-f708907d7e4a does not exist
Oct 01 13:45:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5c0ec63f-d12d-4e8f-b96b-ba088d175437 does not exist
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.027004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350027079, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1139, "num_deletes": 252, "total_data_size": 1667958, "memory_usage": 1690480, "flush_reason": "Manual Compaction"}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:45:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350040761, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1651378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24592, "largest_seqno": 25730, "table_properties": {"data_size": 1645729, "index_size": 3044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11783, "raw_average_key_size": 19, "raw_value_size": 1634488, "raw_average_value_size": 2765, "num_data_blocks": 136, "num_entries": 591, "num_filter_entries": 591, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326243, "oldest_key_time": 1759326243, "file_creation_time": 1759326350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13816 microseconds, and 8707 cpu microseconds.
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.040827) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1651378 bytes OK
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.040852) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043594) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043624) EVENT_LOG_v1 {"time_micros": 1759326350043614, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.043652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1662709, prev total WAL file size 1662709, number of live WAL files 2.
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.044948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1612KB)], [56(7682KB)]
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350045005, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9518682, "oldest_snapshot_seqno": -1}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4698 keys, 7758107 bytes, temperature: kUnknown
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350083417, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7758107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7725754, "index_size": 19507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117698, "raw_average_key_size": 25, "raw_value_size": 7639714, "raw_average_value_size": 1626, "num_data_blocks": 806, "num_entries": 4698, "num_filter_entries": 4698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326350, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.083757) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7758107 bytes
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.086547) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.1 rd, 201.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 5217, records dropped: 519 output_compression: NoCompression
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.086573) EVENT_LOG_v1 {"time_micros": 1759326350086559, "job": 30, "event": "compaction_finished", "compaction_time_micros": 38529, "compaction_time_cpu_micros": 22900, "output_level": 6, "num_output_files": 1, "total_output_size": 7758107, "num_input_records": 5217, "num_output_records": 4698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350087353, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326350090328, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.044833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:45:50.090518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:45:50 compute-0 sudo[278570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:50 compute-0 sudo[278570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:50 compute-0 sudo[278570]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:50 compute-0 ceph-mon[74802]: pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:45:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:45:50 compute-0 sudo[278595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:45:50 compute-0 sudo[278595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:50 compute-0 sudo[278595]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:50 compute-0 sudo[278620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:50 compute-0 sudo[278620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:50 compute-0 sudo[278620]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:50 compute-0 sudo[278645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:45:50 compute-0 sudo[278645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.738878924 +0000 UTC m=+0.044660641 container create 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 13:45:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:50 compute-0 systemd[1]: Started libpod-conmon-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope.
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.716982553 +0000 UTC m=+0.022764270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:50 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.850964853 +0000 UTC m=+0.156746590 container init 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.860190144 +0000 UTC m=+0.165971841 container start 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.865835502 +0000 UTC m=+0.171617199 container attach 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:45:50 compute-0 elegant_cori[278725]: 167 167
Oct 01 13:45:50 compute-0 systemd[1]: libpod-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope: Deactivated successfully.
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.86797046 +0000 UTC m=+0.173752147 container died 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1dd3f8dbace0cbc83d6959de45a5828d854df34c8928606a5c1bb2d5913678b-merged.mount: Deactivated successfully.
Oct 01 13:45:50 compute-0 podman[278709]: 2025-10-01 13:45:50.911762862 +0000 UTC m=+0.217544549 container remove 45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cori, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:45:50 compute-0 systemd[1]: libpod-conmon-45e86f146cf8e52263dc5fd0ff0ee756882d74e4b48ad529b456eda8a49b164e.scope: Deactivated successfully.
Oct 01 13:45:51 compute-0 podman[278750]: 2025-10-01 13:45:51.129890348 +0000 UTC m=+0.059073456 container create 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:45:51 compute-0 systemd[1]: Started libpod-conmon-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope.
Oct 01 13:45:51 compute-0 podman[278750]: 2025-10-01 13:45:51.101991578 +0000 UTC m=+0.031174736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:51 compute-0 podman[278750]: 2025-10-01 13:45:51.240958086 +0000 UTC m=+0.170141214 container init 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:45:51 compute-0 podman[278750]: 2025-10-01 13:45:51.257357183 +0000 UTC m=+0.186540281 container start 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:45:51 compute-0 podman[278750]: 2025-10-01 13:45:51.261773592 +0000 UTC m=+0.190956700 container attach 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:52 compute-0 ceph-mon[74802]: pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:52 compute-0 kind_golick[278766]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:45:52 compute-0 kind_golick[278766]: --> relative data size: 1.0
Oct 01 13:45:52 compute-0 kind_golick[278766]: --> All data devices are unavailable
Oct 01 13:45:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:52 compute-0 systemd[1]: libpod-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Deactivated successfully.
Oct 01 13:45:52 compute-0 podman[278750]: 2025-10-01 13:45:52.440486126 +0000 UTC m=+1.369669214 container died 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:45:52 compute-0 systemd[1]: libpod-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Consumed 1.127s CPU time.
Oct 01 13:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-db068c65411fac6fc7dca952c4caeddb3e7e0b5772ec02def4689235cf000f88-merged.mount: Deactivated successfully.
Oct 01 13:45:52 compute-0 podman[278750]: 2025-10-01 13:45:52.517041564 +0000 UTC m=+1.446224632 container remove 28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 13:45:52 compute-0 systemd[1]: libpod-conmon-28b6c124e72e86e427e2d6ea3dcf791dc6e8d08884928e5fd4cf3262cd50bf30.scope: Deactivated successfully.
Oct 01 13:45:52 compute-0 sudo[278645]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:52 compute-0 sudo[278809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:52 compute-0 sudo[278809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:52 compute-0 sudo[278809]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:52 compute-0 sudo[278834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:45:52 compute-0 sudo[278834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:52 compute-0 sudo[278834]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:52 compute-0 sudo[278859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:52 compute-0 sudo[278859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:52 compute-0 sudo[278859]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:52 compute-0 sudo[278884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:45:52 compute-0 sudo[278884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.309670319 +0000 UTC m=+0.051200088 container create 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:45:53 compute-0 systemd[1]: Started libpod-conmon-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope.
Oct 01 13:45:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.285992721 +0000 UTC m=+0.027522570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.39365723 +0000 UTC m=+0.135187009 container init 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.4041115 +0000 UTC m=+0.145641299 container start 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.407841398 +0000 UTC m=+0.149371167 container attach 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:45:53 compute-0 relaxed_fermat[278965]: 167 167
Oct 01 13:45:53 compute-0 systemd[1]: libpod-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope: Deactivated successfully.
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.412908898 +0000 UTC m=+0.154438667 container died 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-840851ab783916b7a621b7eee10a0acd5de93b94a806bec5d0daa61f6af0dd7f-merged.mount: Deactivated successfully.
Oct 01 13:45:53 compute-0 podman[278949]: 2025-10-01 13:45:53.456858555 +0000 UTC m=+0.198388324 container remove 677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:45:53 compute-0 systemd[1]: libpod-conmon-677fcc9a16412a3fc1a02061c59baad9c388069f0c12ca0bc2197d7abe0f5368.scope: Deactivated successfully.
Oct 01 13:45:53 compute-0 podman[278989]: 2025-10-01 13:45:53.676127698 +0000 UTC m=+0.056243016 container create 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:45:53 compute-0 systemd[1]: Started libpod-conmon-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope.
Oct 01 13:45:53 compute-0 podman[278989]: 2025-10-01 13:45:53.650500359 +0000 UTC m=+0.030615757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:53 compute-0 podman[278989]: 2025-10-01 13:45:53.766884363 +0000 UTC m=+0.146999681 container init 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:45:53 compute-0 podman[278989]: 2025-10-01 13:45:53.778389626 +0000 UTC m=+0.158504924 container start 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:45:53 compute-0 podman[278989]: 2025-10-01 13:45:53.782937971 +0000 UTC m=+0.163053289 container attach 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:45:54 compute-0 ceph-mon[74802]: pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:54 compute-0 cranky_boyd[279005]: {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     "0": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "devices": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "/dev/loop3"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             ],
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_name": "ceph_lv0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_size": "21470642176",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "name": "ceph_lv0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "tags": {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_name": "ceph",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.crush_device_class": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.encrypted": "0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_id": "0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.vdo": "0"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             },
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "vg_name": "ceph_vg0"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         }
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     ],
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     "1": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "devices": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "/dev/loop4"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             ],
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_name": "ceph_lv1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_size": "21470642176",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "name": "ceph_lv1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "tags": {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_name": "ceph",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.crush_device_class": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.encrypted": "0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_id": "1",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.vdo": "0"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             },
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "vg_name": "ceph_vg1"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         }
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     ],
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     "2": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "devices": [
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "/dev/loop5"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             ],
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_name": "ceph_lv2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_size": "21470642176",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "name": "ceph_lv2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "tags": {
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.cluster_name": "ceph",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.crush_device_class": "",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.encrypted": "0",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osd_id": "2",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:                 "ceph.vdo": "0"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             },
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "type": "block",
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:             "vg_name": "ceph_vg2"
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:         }
Oct 01 13:45:54 compute-0 cranky_boyd[279005]:     ]
Oct 01 13:45:54 compute-0 cranky_boyd[279005]: }
Oct 01 13:45:54 compute-0 podman[278989]: 2025-10-01 13:45:54.562096469 +0000 UTC m=+0.942211797 container died 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:45:54 compute-0 systemd[1]: libpod-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope: Deactivated successfully.
Oct 01 13:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff37bd5d8b59c527b70d049aef69df93dd72d589877a95af01e4b92fd93f67b1-merged.mount: Deactivated successfully.
Oct 01 13:45:54 compute-0 podman[278989]: 2025-10-01 13:45:54.636973853 +0000 UTC m=+1.017089151 container remove 6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:45:54 compute-0 systemd[1]: libpod-conmon-6b2ec638db89e1fb5b0c3a0c67117de8c29990dd580f2d311e4f4e34c4f22f46.scope: Deactivated successfully.
Oct 01 13:45:54 compute-0 sudo[278884]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:54 compute-0 podman[279025]: 2025-10-01 13:45:54.711386383 +0000 UTC m=+0.079661247 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct 01 13:45:54 compute-0 podman[279016]: 2025-10-01 13:45:54.728026908 +0000 UTC m=+0.122326842 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:54 compute-0 podman[279023]: 2025-10-01 13:45:54.728051009 +0000 UTC m=+0.123356566 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 01 13:45:54 compute-0 sudo[279091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:54 compute-0 sudo[279091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:54 compute-0 sudo[279091]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:54 compute-0 podman[279039]: 2025-10-01 13:45:54.768236698 +0000 UTC m=+0.132381911 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Oct 01 13:45:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:54 compute-0 sudo[279126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:45:54 compute-0 sudo[279126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:54 compute-0 sudo[279126]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:54 compute-0 sudo[279151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:54 compute-0 sudo[279151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:54 compute-0 sudo[279151]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:55 compute-0 sudo[279176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:45:55 compute-0 sudo[279176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:45:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:45:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:45:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.416010069 +0000 UTC m=+0.054817771 container create 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:45:55 compute-0 systemd[1]: Started libpod-conmon-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope.
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.393639403 +0000 UTC m=+0.032447135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.523944317 +0000 UTC m=+0.162752029 container init 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.532648062 +0000 UTC m=+0.171455784 container start 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.537178275 +0000 UTC m=+0.175985967 container attach 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:55 compute-0 gallant_wing[279255]: 167 167
Oct 01 13:45:55 compute-0 systemd[1]: libpod-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope: Deactivated successfully.
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.543614138 +0000 UTC m=+0.182421840 container died 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-845b779a0aae1d8278706b21de4e9d50cde171f1b0dbce963a096f5c0aa0c99d-merged.mount: Deactivated successfully.
Oct 01 13:45:55 compute-0 podman[279239]: 2025-10-01 13:45:55.595910229 +0000 UTC m=+0.234717941 container remove 11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:45:55 compute-0 systemd[1]: libpod-conmon-11ea5a4053ab18860805a7f1acf3c9f187bcaf69764f22dfc12c3b33eb69af68.scope: Deactivated successfully.
Oct 01 13:45:55 compute-0 podman[279278]: 2025-10-01 13:45:55.838069384 +0000 UTC m=+0.072619294 container create 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:45:55 compute-0 systemd[1]: Started libpod-conmon-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope.
Oct 01 13:45:55 compute-0 podman[279278]: 2025-10-01 13:45:55.807876491 +0000 UTC m=+0.042426471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:45:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:45:55 compute-0 podman[279278]: 2025-10-01 13:45:55.943459142 +0000 UTC m=+0.178009022 container init 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:45:55 compute-0 podman[279278]: 2025-10-01 13:45:55.956661248 +0000 UTC m=+0.191211138 container start 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:55 compute-0 podman[279278]: 2025-10-01 13:45:55.96080743 +0000 UTC m=+0.195357370 container attach 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:45:56 compute-0 ceph-mon[74802]: pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:45:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4113137374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:45:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]: {
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_id": 0,
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "type": "bluestore"
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     },
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_id": 2,
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "type": "bluestore"
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     },
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_id": 1,
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:         "type": "bluestore"
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]:     }
Oct 01 13:45:57 compute-0 pedantic_merkle[279295]: }
Oct 01 13:45:57 compute-0 systemd[1]: libpod-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Deactivated successfully.
Oct 01 13:45:57 compute-0 podman[279278]: 2025-10-01 13:45:57.046717894 +0000 UTC m=+1.281267794 container died 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:45:57 compute-0 systemd[1]: libpod-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Consumed 1.096s CPU time.
Oct 01 13:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0dc5403736ffe499903a5cf6d16dfb930e1c59dcb59855dabe0ab55f99ba1ef-merged.mount: Deactivated successfully.
Oct 01 13:45:57 compute-0 podman[279278]: 2025-10-01 13:45:57.103489235 +0000 UTC m=+1.338039115 container remove 5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:45:57 compute-0 systemd[1]: libpod-conmon-5348d5d946949bd6f70f16c3bf88140f99fa25956da5cde94b34711db5d09646.scope: Deactivated successfully.
Oct 01 13:45:57 compute-0 sudo[279176]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:45:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:45:57 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b453c2e3-1064-48e8-8d05-46ceff8f3127 does not exist
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ae40c87-7e53-4852-aff3-653038d07db2 does not exist
Oct 01 13:45:57 compute-0 sudo[279340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:45:57 compute-0 sudo[279340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:57 compute-0 sudo[279340]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:57 compute-0 sudo[279365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:45:57 compute-0 sudo[279365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:45:57 compute-0 sudo[279365]: pam_unix(sudo:session): session closed for user root
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:45:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:45:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:45:58 compute-0 ceph-mon[74802]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:45:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:58 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:45:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:00 compute-0 ceph-mon[74802]: pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:02 compute-0 ceph-mon[74802]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:02 compute-0 nova_compute[260022]: 2025-10-01 13:46:02.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.465 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.465 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.466 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:46:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:46:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268064282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:46:03 compute-0 nova_compute[260022]: 2025-10-01 13:46:03.893 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.053 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.054 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:46:04 compute-0 ceph-mon[74802]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:04 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1268064282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.398 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.398 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.413 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.478 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.478 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.497 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.520 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 13:46:04 compute-0 nova_compute[260022]: 2025-10-01 13:46:04.535 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:46:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:46:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652188545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:46:05 compute-0 nova_compute[260022]: 2025-10-01 13:46:05.000 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:46:05 compute-0 nova_compute[260022]: 2025-10-01 13:46:05.006 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:46:05 compute-0 nova_compute[260022]: 2025-10-01 13:46:05.178 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:46:05 compute-0 nova_compute[260022]: 2025-10-01 13:46:05.182 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:46:05 compute-0 nova_compute[260022]: 2025-10-01 13:46:05.182 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:46:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2652188545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:46:06 compute-0 ceph-mon[74802]: pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:07 compute-0 ceph-mon[74802]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:08 compute-0 nova_compute[260022]: 2025-10-01 13:46:08.178 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:08 compute-0 nova_compute[260022]: 2025-10-01 13:46:08.179 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:08 compute-0 nova_compute[260022]: 2025-10-01 13:46:08.179 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:46:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:09 compute-0 nova_compute[260022]: 2025-10-01 13:46:09.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:09 compute-0 nova_compute[260022]: 2025-10-01 13:46:09.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:46:09 compute-0 nova_compute[260022]: 2025-10-01 13:46:09.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:46:09 compute-0 nova_compute[260022]: 2025-10-01 13:46:09.396 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:46:09 compute-0 nova_compute[260022]: 2025-10-01 13:46:09.396 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:09 compute-0 ceph-mon[74802]: pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:11 compute-0 ceph-mon[74802]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.314 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:46:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:46:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:46:12 compute-0 nova_compute[260022]: 2025-10-01 13:46:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:13 compute-0 nova_compute[260022]: 2025-10-01 13:46:13.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:13 compute-0 ceph-mon[74802]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:15 compute-0 nova_compute[260022]: 2025-10-01 13:46:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:15 compute-0 ceph-mon[74802]: pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:17 compute-0 nova_compute[260022]: 2025-10-01 13:46:17.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:46:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:17 compute-0 ceph-mon[74802]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:19 compute-0 ceph-mon[74802]: pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:20 compute-0 sshd-session[279434]: Invalid user chromeuser from 27.254.137.144 port 48330
Oct 01 13:46:20 compute-0 sshd-session[279434]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:46:20 compute-0 sshd-session[279434]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=27.254.137.144
Oct 01 13:46:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:21 compute-0 sshd-session[279434]: Failed password for invalid user chromeuser from 27.254.137.144 port 48330 ssh2
Oct 01 13:46:21 compute-0 ceph-mon[74802]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:23 compute-0 sshd-session[279434]: Received disconnect from 27.254.137.144 port 48330:11: Bye Bye [preauth]
Oct 01 13:46:23 compute-0 sshd-session[279434]: Disconnected from invalid user chromeuser 27.254.137.144 port 48330 [preauth]
Oct 01 13:46:23 compute-0 ceph-mon[74802]: pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:25 compute-0 podman[279438]: 2025-10-01 13:46:25.557074865 +0000 UTC m=+0.088647397 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:46:25 compute-0 podman[279437]: 2025-10-01 13:46:25.557241661 +0000 UTC m=+0.092262253 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:46:25 compute-0 podman[279439]: 2025-10-01 13:46:25.596724575 +0000 UTC m=+0.118219217 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct 01 13:46:25 compute-0 podman[279436]: 2025-10-01 13:46:25.597047186 +0000 UTC m=+0.140001960 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923)
Oct 01 13:46:25 compute-0 ceph-mon[74802]: pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:27 compute-0 ceph-mon[74802]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:29 compute-0 ceph-mon[74802]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:31 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:31.443 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:46:31 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:31.445 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:46:31 compute-0 ceph-mon[74802]: pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:33 compute-0 ceph-mon[74802]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:35 compute-0 ceph-mon[74802]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:36 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:46:36.447 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:46:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:38 compute-0 ceph-mon[74802]: pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:40 compute-0 ceph-mon[74802]: pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:42 compute-0 ceph-mon[74802]: pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:44 compute-0 ceph-mon[74802]: pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:46 compute-0 ceph-mon[74802]: pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:46:47
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'backups', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 01 13:46:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:46:48 compute-0 ceph-mon[74802]: pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:46:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:50 compute-0 ceph-mon[74802]: pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:52 compute-0 ceph-mon[74802]: pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:54 compute-0 ceph-mon[74802]: pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:46:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:46:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:46:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:46:56 compute-0 ceph-mon[74802]: pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:46:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3312553878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:46:56 compute-0 podman[279521]: 2025-10-01 13:46:56.543990749 +0000 UTC m=+0.088030328 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:46:56 compute-0 podman[279523]: 2025-10-01 13:46:56.546328204 +0000 UTC m=+0.085486428 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:46:56 compute-0 podman[279522]: 2025-10-01 13:46:56.557907371 +0000 UTC m=+0.090338091 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 13:46:56 compute-0 podman[279520]: 2025-10-01 13:46:56.573540458 +0000 UTC m=+0.115382197 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:46:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:46:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:46:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:46:57 compute-0 sudo[279605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:46:57 compute-0 sudo[279605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:57 compute-0 sudo[279605]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:57 compute-0 sudo[279630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:46:57 compute-0 sudo[279630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:57 compute-0 sudo[279630]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:57 compute-0 sudo[279655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:46:57 compute-0 sudo[279655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:57 compute-0 sudo[279655]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:57 compute-0 sudo[279680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:46:57 compute-0 sudo[279680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:58 compute-0 sudo[279680]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:58 compute-0 ceph-mon[74802]: pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:46:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2245bf68-ea2b-4f19-9320-ae6348cf57fa does not exist
Oct 01 13:46:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d3cf343-1157-4449-8212-e1fdd3a03718 does not exist
Oct 01 13:46:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a61faa59-1e9f-4137-8a8a-2d9d4c052011 does not exist
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:46:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:46:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:46:58 compute-0 sudo[279734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:46:58 compute-0 sudo[279734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:58 compute-0 sudo[279734]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:58 compute-0 sudo[279759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:46:58 compute-0 sudo[279759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:58 compute-0 sudo[279759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:58 compute-0 sudo[279784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:46:58 compute-0 sudo[279784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:58 compute-0 sudo[279784]: pam_unix(sudo:session): session closed for user root
Oct 01 13:46:58 compute-0 sudo[279809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:46:58 compute-0 sudo[279809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:46:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:46:58 compute-0 podman[279877]: 2025-10-01 13:46:58.9238835 +0000 UTC m=+0.068850819 container create f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:46:58 compute-0 systemd[1]: Started libpod-conmon-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope.
Oct 01 13:46:58 compute-0 podman[279877]: 2025-10-01 13:46:58.888676921 +0000 UTC m=+0.033644330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:46:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:46:59 compute-0 podman[279877]: 2025-10-01 13:46:59.03121696 +0000 UTC m=+0.176184309 container init f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:46:59 compute-0 podman[279877]: 2025-10-01 13:46:59.040706262 +0000 UTC m=+0.185673611 container start f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:46:59 compute-0 podman[279877]: 2025-10-01 13:46:59.045538556 +0000 UTC m=+0.190505875 container attach f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 13:46:59 compute-0 funny_mahavira[279893]: 167 167
Oct 01 13:46:59 compute-0 podman[279877]: 2025-10-01 13:46:59.048879332 +0000 UTC m=+0.193846681 container died f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:46:59 compute-0 systemd[1]: libpod-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope: Deactivated successfully.
Oct 01 13:46:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c059f40f72b3a23614cd069d36d065f62517dd5ff9c5ed8d19d44ebb46b99280-merged.mount: Deactivated successfully.
Oct 01 13:46:59 compute-0 podman[279877]: 2025-10-01 13:46:59.103493417 +0000 UTC m=+0.248460746 container remove f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:46:59 compute-0 systemd[1]: libpod-conmon-f6bf963a6766858b6d50c4521307cf31233d5da57da14a923c24c65afcdd6cca.scope: Deactivated successfully.
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:46:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:46:59 compute-0 podman[279917]: 2025-10-01 13:46:59.332191834 +0000 UTC m=+0.044286999 container create 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:46:59 compute-0 systemd[1]: Started libpod-conmon-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope.
Oct 01 13:46:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:46:59 compute-0 podman[279917]: 2025-10-01 13:46:59.314180392 +0000 UTC m=+0.026275587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:46:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:46:59 compute-0 podman[279917]: 2025-10-01 13:46:59.427260465 +0000 UTC m=+0.139355660 container init 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:46:59 compute-0 podman[279917]: 2025-10-01 13:46:59.440845667 +0000 UTC m=+0.152940832 container start 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:46:59 compute-0 podman[279917]: 2025-10-01 13:46:59.44505752 +0000 UTC m=+0.157152685 container attach 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:47:00 compute-0 ceph-mon[74802]: pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:00 compute-0 sweet_hopper[279934]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:47:00 compute-0 sweet_hopper[279934]: --> relative data size: 1.0
Oct 01 13:47:00 compute-0 sweet_hopper[279934]: --> All data devices are unavailable
Oct 01 13:47:00 compute-0 systemd[1]: libpod-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Deactivated successfully.
Oct 01 13:47:00 compute-0 systemd[1]: libpod-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Consumed 1.229s CPU time.
Oct 01 13:47:00 compute-0 podman[279917]: 2025-10-01 13:47:00.711851283 +0000 UTC m=+1.423946468 container died 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e73cd91cd946eba76520a23f13f8af19c24adb3f87cac98b4edb1f509f1d0b-merged.mount: Deactivated successfully.
Oct 01 13:47:00 compute-0 podman[279917]: 2025-10-01 13:47:00.787376473 +0000 UTC m=+1.499471668 container remove 04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:47:00 compute-0 systemd[1]: libpod-conmon-04be99592cf2215fe97fca778409f7e2eb9ac9d7852b086f36dab7d6d3d92625.scope: Deactivated successfully.
Oct 01 13:47:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:00 compute-0 sudo[279809]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:00 compute-0 sudo[279976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:47:00 compute-0 sudo[279976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:00 compute-0 sudo[279976]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:00 compute-0 sudo[280001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:47:00 compute-0 sudo[280001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:00 compute-0 sudo[280001]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:01 compute-0 sudo[280026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:47:01 compute-0 sudo[280026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:01 compute-0 sudo[280026]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:01 compute-0 sudo[280051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:47:01 compute-0 sudo[280051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.628597742 +0000 UTC m=+0.069509820 container create fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:47:01 compute-0 systemd[1]: Started libpod-conmon-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope.
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.602914126 +0000 UTC m=+0.043826304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:47:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.718921072 +0000 UTC m=+0.159833170 container init fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.727082361 +0000 UTC m=+0.167994439 container start fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.730948445 +0000 UTC m=+0.171860543 container attach fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:47:01 compute-0 lucid_antonelli[280135]: 167 167
Oct 01 13:47:01 compute-0 systemd[1]: libpod-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope: Deactivated successfully.
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.735951893 +0000 UTC m=+0.176863971 container died fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-39f3897de954edc3c5cff937ac9dbd13afe7e020150ce3042b4bbfda72cf1a35-merged.mount: Deactivated successfully.
Oct 01 13:47:01 compute-0 podman[280118]: 2025-10-01 13:47:01.772755783 +0000 UTC m=+0.213667861 container remove fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:47:01 compute-0 systemd[1]: libpod-conmon-fa2f8a461088e7fe8484d6638941a197451cf3ac516d40ef4f7701aa0e5a4842.scope: Deactivated successfully.
Oct 01 13:47:01 compute-0 podman[280159]: 2025-10-01 13:47:01.956801081 +0000 UTC m=+0.060262006 container create f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:47:01 compute-0 systemd[1]: Started libpod-conmon-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope.
Oct 01 13:47:02 compute-0 podman[280159]: 2025-10-01 13:47:01.930082802 +0000 UTC m=+0.033543817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:47:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:02 compute-0 podman[280159]: 2025-10-01 13:47:02.059767762 +0000 UTC m=+0.163228687 container init f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:47:02 compute-0 podman[280159]: 2025-10-01 13:47:02.069505262 +0000 UTC m=+0.172966187 container start f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:47:02 compute-0 podman[280159]: 2025-10-01 13:47:02.073000293 +0000 UTC m=+0.176461218 container attach f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct 01 13:47:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct 01 13:47:02 compute-0 ceph-mon[74802]: pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct 01 13:47:02 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct 01 13:47:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]: {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     "0": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "devices": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "/dev/loop3"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             ],
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_name": "ceph_lv0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_size": "21470642176",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "name": "ceph_lv0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "tags": {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_name": "ceph",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.crush_device_class": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.encrypted": "0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_id": "0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.vdo": "0"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             },
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "vg_name": "ceph_vg0"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         }
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     ],
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     "1": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "devices": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "/dev/loop4"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             ],
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_name": "ceph_lv1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_size": "21470642176",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "name": "ceph_lv1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "tags": {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_name": "ceph",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.crush_device_class": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.encrypted": "0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_id": "1",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.vdo": "0"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             },
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "vg_name": "ceph_vg1"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         }
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     ],
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     "2": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "devices": [
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "/dev/loop5"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             ],
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_name": "ceph_lv2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_size": "21470642176",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "name": "ceph_lv2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "tags": {
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.cluster_name": "ceph",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.crush_device_class": "",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.encrypted": "0",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osd_id": "2",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:                 "ceph.vdo": "0"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             },
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "type": "block",
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:             "vg_name": "ceph_vg2"
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:         }
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]:     ]
Oct 01 13:47:02 compute-0 friendly_blackburn[280176]: }
Oct 01 13:47:02 compute-0 systemd[1]: libpod-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope: Deactivated successfully.
Oct 01 13:47:02 compute-0 podman[280159]: 2025-10-01 13:47:02.946296622 +0000 UTC m=+1.049757557 container died f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 13:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-325b9d651ed26a12d938cbbd9241eef6c4f536c4e5379c9675a7683a87bb7be8-merged.mount: Deactivated successfully.
Oct 01 13:47:03 compute-0 podman[280159]: 2025-10-01 13:47:03.012869118 +0000 UTC m=+1.116330043 container remove f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackburn, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:47:03 compute-0 systemd[1]: libpod-conmon-f95141466ccc073a3db0215ac95c781520ea0886d0c701265637578ea102c0dd.scope: Deactivated successfully.
Oct 01 13:47:03 compute-0 sudo[280051]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:03 compute-0 sudo[280199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:47:03 compute-0 sudo[280199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:03 compute-0 sudo[280199]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:03 compute-0 sudo[280224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:47:03 compute-0 sudo[280224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:03 compute-0 sudo[280224]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:03 compute-0 ceph-mon[74802]: osdmap e153: 3 total, 3 up, 3 in
Oct 01 13:47:03 compute-0 ceph-mon[74802]: pgmap v1291: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:47:03 compute-0 sudo[280249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:47:03 compute-0 sudo[280249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:03 compute-0 sudo[280249]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:03 compute-0 nova_compute[260022]: 2025-10-01 13:47:03.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:03 compute-0 sudo[280274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:47:03 compute-0 sudo[280274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.727449444 +0000 UTC m=+0.039604010 container create 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:47:03 compute-0 systemd[1]: Started libpod-conmon-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope.
Oct 01 13:47:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.709450561 +0000 UTC m=+0.021605137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.809259062 +0000 UTC m=+0.121413628 container init 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.816650608 +0000 UTC m=+0.128805164 container start 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.819568881 +0000 UTC m=+0.131723437 container attach 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:47:03 compute-0 compassionate_goldstine[280359]: 167 167
Oct 01 13:47:03 compute-0 systemd[1]: libpod-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope: Deactivated successfully.
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.828136073 +0000 UTC m=+0.140290649 container died 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8db80256abd2848d755fa135a4bb8bdb74bc165263f7902a0048abbb44185303-merged.mount: Deactivated successfully.
Oct 01 13:47:03 compute-0 podman[280343]: 2025-10-01 13:47:03.867524085 +0000 UTC m=+0.179678641 container remove 49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:47:03 compute-0 systemd[1]: libpod-conmon-49a1664bd40af87e997666b73bbc84a5a3bfa2ea9efa22016b5368a90525efb3.scope: Deactivated successfully.
Oct 01 13:47:04 compute-0 podman[280383]: 2025-10-01 13:47:04.069273154 +0000 UTC m=+0.056963040 container create 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:47:04 compute-0 systemd[1]: Started libpod-conmon-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope.
Oct 01 13:47:04 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:47:04 compute-0 podman[280383]: 2025-10-01 13:47:04.050048213 +0000 UTC m=+0.037738119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:47:04 compute-0 podman[280383]: 2025-10-01 13:47:04.16199632 +0000 UTC m=+0.149686256 container init 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:47:04 compute-0 podman[280383]: 2025-10-01 13:47:04.168942682 +0000 UTC m=+0.156632588 container start 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:47:04 compute-0 podman[280383]: 2025-10-01 13:47:04.172552597 +0000 UTC m=+0.160242483 container attach 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.371 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.372 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:47:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:47:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271199026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:47:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:47:04 compute-0 nova_compute[260022]: 2025-10-01 13:47:04.826 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:47:04 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2271199026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.045 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.046 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5064MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.047 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.047 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.145 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.146 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.146 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:47:05 compute-0 festive_hoover[280399]: {
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_id": 0,
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "type": "bluestore"
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     },
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_id": 2,
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "type": "bluestore"
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     },
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_id": 1,
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:47:05 compute-0 festive_hoover[280399]:         "type": "bluestore"
Oct 01 13:47:05 compute-0 festive_hoover[280399]:     }
Oct 01 13:47:05 compute-0 festive_hoover[280399]: }
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.184 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:47:05 compute-0 systemd[1]: libpod-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Deactivated successfully.
Oct 01 13:47:05 compute-0 podman[280383]: 2025-10-01 13:47:05.202343248 +0000 UTC m=+1.190033134 container died 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:47:05 compute-0 systemd[1]: libpod-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Consumed 1.010s CPU time.
Oct 01 13:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-66cf05207816a2b750f4be31120794ff164b981f36c5e4c049e27145845e2b63-merged.mount: Deactivated successfully.
Oct 01 13:47:05 compute-0 podman[280383]: 2025-10-01 13:47:05.257405167 +0000 UTC m=+1.245095053 container remove 8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:47:05 compute-0 systemd[1]: libpod-conmon-8e0f95360219c9d38cc87f59f76659a6f5e7246c233bfbf7b41df216e628afcf.scope: Deactivated successfully.
Oct 01 13:47:05 compute-0 sudo[280274]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:47:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:47:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:47:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:47:05 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 354bc6d8-eac4-45e9-bad5-cedf753ad956 does not exist
Oct 01 13:47:05 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6d333d67-577c-44d6-9943-844da9a5b104 does not exist
Oct 01 13:47:05 compute-0 sudo[280484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:47:05 compute-0 sudo[280484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:05 compute-0 sudo[280484]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:05 compute-0 sudo[280511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:47:05 compute-0 sudo[280511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:47:05 compute-0 sudo[280511]: pam_unix(sudo:session): session closed for user root
Oct 01 13:47:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:47:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478062052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.651 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.657 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.670 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.672 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:47:05 compute-0 nova_compute[260022]: 2025-10-01 13:47:05.672 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:47:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct 01 13:47:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct 01 13:47:05 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct 01 13:47:05 compute-0 ceph-mon[74802]: pgmap v1292: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 13:47:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:47:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:47:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/478062052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:47:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:47:06 compute-0 ceph-mon[74802]: osdmap e154: 3 total, 3 up, 3 in
Oct 01 13:47:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:07 compute-0 ceph-mon[74802]: pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:47:08 compute-0 nova_compute[260022]: 2025-10-01 13:47:08.668 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:08 compute-0 nova_compute[260022]: 2025-10-01 13:47:08.668 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:08 compute-0 nova_compute[260022]: 2025-10-01 13:47:08.669 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:47:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct 01 13:47:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct 01 13:47:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct 01 13:47:08 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct 01 13:47:09 compute-0 ceph-mon[74802]: pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct 01 13:47:09 compute-0 ceph-mon[74802]: osdmap e155: 3 total, 3 up, 3 in
Oct 01 13:47:10 compute-0 nova_compute[260022]: 2025-10-01 13:47:10.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:47:11 compute-0 nova_compute[260022]: 2025-10-01 13:47:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:11 compute-0 nova_compute[260022]: 2025-10-01 13:47:11.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:47:11 compute-0 nova_compute[260022]: 2025-10-01 13:47:11.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:47:11 compute-0 nova_compute[260022]: 2025-10-01 13:47:11.451 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:47:11 compute-0 ceph-mon[74802]: pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:47:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:47:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:47:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:12.315 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:47:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct 01 13:47:13 compute-0 nova_compute[260022]: 2025-10-01 13:47:13.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:13 compute-0 ceph-mon[74802]: pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 KiB/s wr, 37 op/s
Oct 01 13:47:14 compute-0 nova_compute[260022]: 2025-10-01 13:47:14.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.6 KiB/s wr, 33 op/s
Oct 01 13:47:15 compute-0 ceph-mon[74802]: pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.6 KiB/s wr, 33 op/s
Oct 01 13:47:16 compute-0 nova_compute[260022]: 2025-10-01 13:47:16.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:47:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 01 13:47:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct 01 13:47:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct 01 13:47:16 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct 01 13:47:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:17 compute-0 ceph-mon[74802]: pgmap v1300: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct 01 13:47:17 compute-0 ceph-mon[74802]: osdmap e156: 3 total, 3 up, 3 in
Oct 01 13:47:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct 01 13:47:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct 01 13:47:17 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct 01 13:47:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 KiB/s wr, 46 op/s
Oct 01 13:47:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct 01 13:47:18 compute-0 ceph-mon[74802]: osdmap e157: 3 total, 3 up, 3 in
Oct 01 13:47:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct 01 13:47:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct 01 13:47:19 compute-0 ceph-mon[74802]: pgmap v1303: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.2 KiB/s wr, 46 op/s
Oct 01 13:47:19 compute-0 ceph-mon[74802]: osdmap e158: 3 total, 3 up, 3 in
Oct 01 13:47:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Oct 01 13:47:21 compute-0 ceph-mon[74802]: pgmap v1305: 305 pgs: 305 active+clean; 41 MiB data, 191 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Oct 01 13:47:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct 01 13:47:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct 01 13:47:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct 01 13:47:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct 01 13:47:23 compute-0 ceph-mon[74802]: osdmap e159: 3 total, 3 up, 3 in
Oct 01 13:47:23 compute-0 ceph-mon[74802]: pgmap v1307: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.2 KiB/s wr, 127 op/s
Oct 01 13:47:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.7 KiB/s wr, 76 op/s
Oct 01 13:47:25 compute-0 ceph-mon[74802]: pgmap v1308: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.7 KiB/s wr, 76 op/s
Oct 01 13:47:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct 01 13:47:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct 01 13:47:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct 01 13:47:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct 01 13:47:27 compute-0 podman[280542]: 2025-10-01 13:47:27.541769407 +0000 UTC m=+0.083014858 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 13:47:27 compute-0 podman[280544]: 2025-10-01 13:47:27.541916772 +0000 UTC m=+0.074731315 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:47:27 compute-0 podman[280541]: 2025-10-01 13:47:27.586379684 +0000 UTC m=+0.136543889 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:47:27 compute-0 podman[280540]: 2025-10-01 13:47:27.60261033 +0000 UTC m=+0.153611492 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:47:27 compute-0 ceph-mon[74802]: pgmap v1309: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct 01 13:47:27 compute-0 ceph-mon[74802]: osdmap e160: 3 total, 3 up, 3 in
Oct 01 13:47:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct 01 13:47:29 compute-0 ceph-mon[74802]: pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Oct 01 13:47:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct 01 13:47:31 compute-0 ceph-mon[74802]: pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct 01 13:47:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:32.414 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:47:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:32.415 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:47:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:33 compute-0 ceph-mon[74802]: pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:35 compute-0 ceph-mon[74802]: pgmap v1314: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:37 compute-0 ceph-mon[74802]: pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:39 compute-0 ceph-mon[74802]: pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:41 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:47:41.419 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:47:41 compute-0 ceph-mon[74802]: pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:43 compute-0 ceph-mon[74802]: pgmap v1318: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:45 compute-0 ceph-mon[74802]: pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:47:47
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta']
Oct 01 13:47:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:47:47 compute-0 ceph-mon[74802]: pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:47:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:49 compute-0 ceph-mon[74802]: pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:51 compute-0 ceph-mon[74802]: pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:53 compute-0 ceph-mon[74802]: pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:47:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:47:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:47:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:47:55 compute-0 sshd-session[280624]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:47:55 compute-0 sshd-session[280624]: banner exchange: Connection from 14.103.127.7 port 38808: Connection timed out
Oct 01 13:47:56 compute-0 ceph-mon[74802]: pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:47:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1882943040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:47:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:47:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:47:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:47:58 compute-0 ceph-mon[74802]: pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:47:58 compute-0 podman[280629]: 2025-10-01 13:47:58.528278343 +0000 UTC m=+0.060225596 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 01 13:47:58 compute-0 podman[280627]: 2025-10-01 13:47:58.538355102 +0000 UTC m=+0.078384551 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:47:58 compute-0 podman[280628]: 2025-10-01 13:47:58.540569223 +0000 UTC m=+0.070730928 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:47:58 compute-0 podman[280626]: 2025-10-01 13:47:58.565882877 +0000 UTC m=+0.110899515 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 01 13:47:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:00 compute-0 ceph-mon[74802]: pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:48:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5963 writes, 26K keys, 5963 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5963 writes, 5963 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1384 writes, 6249 keys, 1384 commit groups, 1.0 writes per commit group, ingest: 9.02 MB, 0.02 MB/s
                                           Interval WAL: 1384 writes, 1384 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.0      1.98              0.12        15    0.132       0      0       0.0       0.0
                                             L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     37.6     30.9      3.30              0.38        14    0.235     64K   7722       0.0       0.0
                                            Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     23.5     24.9      5.28              0.50        29    0.182     64K   7722       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     98.4     99.0      0.40              0.16         8    0.050     21K   2560       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     37.6     30.9      3.30              0.38        14    0.235     64K   7722       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.0      1.97              0.12        14    0.141       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 5.3 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 13.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000237 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(856,12.88 MB,4.23537%) FilterBlock(30,185.05 KB,0.059444%) IndexBlock(30,341.62 KB,0.109743%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:48:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:02 compute-0 ceph-mon[74802]: pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:04 compute-0 ceph-mon[74802]: pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.405 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.406 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:48:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:48:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330084154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:48:04 compute-0 nova_compute[260022]: 2025-10-01 13:48:04.833 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:48:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.035 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.036 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.037 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.037 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:48:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3330084154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.197 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.198 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.198 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.241 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:48:05 compute-0 sudo[280749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:05 compute-0 sudo[280749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:05 compute-0 sudo[280749]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:05 compute-0 sudo[280774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:48:05 compute-0 sudo[280774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:05 compute-0 sudo[280774]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:48:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788869825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:48:05 compute-0 sudo[280799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:05 compute-0 sudo[280799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:05 compute-0 sudo[280799]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.678 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.684 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:48:05 compute-0 sudo[280826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:48:05 compute-0 sudo[280826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.736 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.737 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:48:05 compute-0 nova_compute[260022]: 2025-10-01 13:48:05.738 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:48:06 compute-0 ceph-mon[74802]: pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:06 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1788869825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:48:06 compute-0 sudo[280826]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 538c297f-918d-4018-a1fe-105fa1f0dea3 does not exist
Oct 01 13:48:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8bd8dcbf-17e8-4961-b308-143b5b67dc26 does not exist
Oct 01 13:48:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 21917a58-3955-4263-ae13-03dcad7d966a does not exist
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:48:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:48:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:48:06 compute-0 sudo[280882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:06 compute-0 sudo[280882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:06 compute-0 sudo[280882]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:06 compute-0 sudo[280907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:48:06 compute-0 sudo[280907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:06 compute-0 sudo[280907]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:06 compute-0 sudo[280932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:06 compute-0 sudo[280932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:06 compute-0 sudo[280932]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:06 compute-0 sudo[280957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:48:06 compute-0 sudo[280957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:06 compute-0 podman[281023]: 2025-10-01 13:48:06.91814909 +0000 UTC m=+0.049299497 container create 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:48:06 compute-0 systemd[1]: Started libpod-conmon-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope.
Oct 01 13:48:06 compute-0 podman[281023]: 2025-10-01 13:48:06.896104339 +0000 UTC m=+0.027254836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:07 compute-0 podman[281023]: 2025-10-01 13:48:07.035183998 +0000 UTC m=+0.166334495 container init 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:48:07 compute-0 podman[281023]: 2025-10-01 13:48:07.047209361 +0000 UTC m=+0.178359808 container start 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:48:07 compute-0 podman[281023]: 2025-10-01 13:48:07.051455535 +0000 UTC m=+0.182605952 container attach 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:48:07 compute-0 upbeat_mcnulty[281040]: 167 167
Oct 01 13:48:07 compute-0 systemd[1]: libpod-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope: Deactivated successfully.
Oct 01 13:48:07 compute-0 conmon[281040]: conmon 039739ad2e21c39cbe67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope/container/memory.events
Oct 01 13:48:07 compute-0 podman[281023]: 2025-10-01 13:48:07.057773176 +0000 UTC m=+0.188923593 container died 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:48:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-76664d89fa0670707b437d45ef5cc95128390940574cabdd27338878053d40e5-merged.mount: Deactivated successfully.
Oct 01 13:48:07 compute-0 podman[281023]: 2025-10-01 13:48:07.11705218 +0000 UTC m=+0.248202627 container remove 039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:48:07 compute-0 systemd[1]: libpod-conmon-039739ad2e21c39cbe6793993b2f86b386aa455f071cd8c7a6165aeeaea99ec1.scope: Deactivated successfully.
Oct 01 13:48:07 compute-0 podman[281064]: 2025-10-01 13:48:07.354959699 +0000 UTC m=+0.074654932 container create 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:48:07 compute-0 systemd[1]: Started libpod-conmon-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope.
Oct 01 13:48:07 compute-0 podman[281064]: 2025-10-01 13:48:07.323605393 +0000 UTC m=+0.043300726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:07 compute-0 podman[281064]: 2025-10-01 13:48:07.471970787 +0000 UTC m=+0.191666040 container init 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:48:07 compute-0 podman[281064]: 2025-10-01 13:48:07.486551031 +0000 UTC m=+0.206246304 container start 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:48:07 compute-0 podman[281064]: 2025-10-01 13:48:07.491000632 +0000 UTC m=+0.210695915 container attach 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:48:08 compute-0 ceph-mon[74802]: pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:08 compute-0 objective_clarke[281081]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:48:08 compute-0 objective_clarke[281081]: --> relative data size: 1.0
Oct 01 13:48:08 compute-0 objective_clarke[281081]: --> All data devices are unavailable
Oct 01 13:48:08 compute-0 systemd[1]: libpod-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Deactivated successfully.
Oct 01 13:48:08 compute-0 systemd[1]: libpod-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Consumed 1.110s CPU time.
Oct 01 13:48:08 compute-0 podman[281064]: 2025-10-01 13:48:08.649963708 +0000 UTC m=+1.369658951 container died 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9b822c4c9a4eee0c7262ea5a312050075b5b7c698a10df5c1fb5e908a157d10-merged.mount: Deactivated successfully.
Oct 01 13:48:08 compute-0 podman[281064]: 2025-10-01 13:48:08.706941088 +0000 UTC m=+1.426636341 container remove 694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_clarke, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:48:08 compute-0 systemd[1]: libpod-conmon-694a5613941080a5ef1abff8ebbe89d41dc48b240b0994cf1f8313ef30c18fe1.scope: Deactivated successfully.
Oct 01 13:48:08 compute-0 sudo[280957]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:08 compute-0 sudo[281123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:08 compute-0 sudo[281123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:08 compute-0 sudo[281123]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:08 compute-0 sudo[281148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:48:08 compute-0 sudo[281148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:08 compute-0 sudo[281148]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:09 compute-0 sudo[281173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:09 compute-0 sudo[281173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:09 compute-0 sudo[281173]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:09 compute-0 sudo[281198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:48:09 compute-0 sudo[281198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.538086878 +0000 UTC m=+0.068409154 container create f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct 01 13:48:09 compute-0 systemd[1]: Started libpod-conmon-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope.
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.508372684 +0000 UTC m=+0.038695010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.641775013 +0000 UTC m=+0.172097319 container init f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.654879099 +0000 UTC m=+0.185201335 container start f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.658405751 +0000 UTC m=+0.188728077 container attach f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:48:09 compute-0 systemd[1]: libpod-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope: Deactivated successfully.
Oct 01 13:48:09 compute-0 brave_moore[281277]: 167 167
Oct 01 13:48:09 compute-0 conmon[281277]: conmon f5e54a5a25554fed6db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope/container/memory.events
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.665232288 +0000 UTC m=+0.195554524 container died f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cacd3208b32d25f56745411fd4412298766162860188cf1040112937c82e8a9f-merged.mount: Deactivated successfully.
Oct 01 13:48:09 compute-0 podman[281263]: 2025-10-01 13:48:09.702038248 +0000 UTC m=+0.232360484 container remove f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moore, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:48:09 compute-0 systemd[1]: libpod-conmon-f5e54a5a25554fed6db892617cdd77116d6c332c876e757a549188dc6bc13221.scope: Deactivated successfully.
Oct 01 13:48:09 compute-0 nova_compute[260022]: 2025-10-01 13:48:09.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:09 compute-0 nova_compute[260022]: 2025-10-01 13:48:09.735 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:09 compute-0 nova_compute[260022]: 2025-10-01 13:48:09.736 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:48:09 compute-0 podman[281305]: 2025-10-01 13:48:09.900787083 +0000 UTC m=+0.053829361 container create aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:48:09 compute-0 systemd[1]: Started libpod-conmon-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope.
Oct 01 13:48:09 compute-0 podman[281305]: 2025-10-01 13:48:09.874514278 +0000 UTC m=+0.027556596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:10 compute-0 podman[281305]: 2025-10-01 13:48:10.007386371 +0000 UTC m=+0.160428699 container init aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:48:10 compute-0 podman[281305]: 2025-10-01 13:48:10.018570626 +0000 UTC m=+0.171612904 container start aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:48:10 compute-0 podman[281305]: 2025-10-01 13:48:10.022787689 +0000 UTC m=+0.175829977 container attach aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:48:10 compute-0 ceph-mon[74802]: pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]: {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     "0": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "devices": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "/dev/loop3"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             ],
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_name": "ceph_lv0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_size": "21470642176",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "name": "ceph_lv0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "tags": {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_name": "ceph",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.crush_device_class": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.encrypted": "0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_id": "0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.vdo": "0"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             },
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "vg_name": "ceph_vg0"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         }
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     ],
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     "1": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "devices": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "/dev/loop4"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             ],
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_name": "ceph_lv1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_size": "21470642176",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "name": "ceph_lv1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "tags": {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_name": "ceph",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.crush_device_class": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.encrypted": "0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_id": "1",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.vdo": "0"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             },
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "vg_name": "ceph_vg1"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         }
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     ],
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     "2": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "devices": [
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "/dev/loop5"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             ],
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_name": "ceph_lv2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_size": "21470642176",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "name": "ceph_lv2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "tags": {
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.cluster_name": "ceph",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.crush_device_class": "",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.encrypted": "0",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osd_id": "2",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:                 "ceph.vdo": "0"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             },
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "type": "block",
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:             "vg_name": "ceph_vg2"
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:         }
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]:     ]
Oct 01 13:48:10 compute-0 youthful_torvalds[281320]: }
Oct 01 13:48:10 compute-0 systemd[1]: libpod-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope: Deactivated successfully.
Oct 01 13:48:10 compute-0 podman[281305]: 2025-10-01 13:48:10.898813055 +0000 UTC m=+1.051855333 container died aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-12fc9c13f38483578e1787b60be115611db11961bc79c6649929780bf813b41f-merged.mount: Deactivated successfully.
Oct 01 13:48:10 compute-0 podman[281305]: 2025-10-01 13:48:10.973025294 +0000 UTC m=+1.126067542 container remove aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:48:10 compute-0 systemd[1]: libpod-conmon-aa2f83019882c1a9a07f8923f926f189f548c732c04f0d4bdb6c738a4ed57053.scope: Deactivated successfully.
Oct 01 13:48:11 compute-0 sudo[281198]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:11 compute-0 sudo[281346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:11 compute-0 sudo[281346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:11 compute-0 sudo[281346]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:11 compute-0 sudo[281371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:48:11 compute-0 sudo[281371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:11 compute-0 sudo[281371]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:11 compute-0 sshd-session[281325]: Invalid user Administrator from 185.156.73.233 port 45894
Oct 01 13:48:11 compute-0 sudo[281396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:11 compute-0 sudo[281396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:11 compute-0 sudo[281396]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:11 compute-0 nova_compute[260022]: 2025-10-01 13:48:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:11 compute-0 nova_compute[260022]: 2025-10-01 13:48:11.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:48:11 compute-0 nova_compute[260022]: 2025-10-01 13:48:11.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:48:11 compute-0 nova_compute[260022]: 2025-10-01 13:48:11.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:48:11 compute-0 nova_compute[260022]: 2025-10-01 13:48:11.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:11 compute-0 sudo[281421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:48:11 compute-0 sudo[281421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:11 compute-0 sshd-session[281325]: Failed none for invalid user Administrator from 185.156.73.233 port 45894 ssh2
Oct 01 13:48:11 compute-0 sshd-session[281325]: Connection closed by invalid user Administrator 185.156.73.233 port 45894 [preauth]
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.755272609 +0000 UTC m=+0.044637259 container create d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:48:11 compute-0 systemd[1]: Started libpod-conmon-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope.
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.736635697 +0000 UTC m=+0.026000357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.85381419 +0000 UTC m=+0.143178870 container init d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.864723597 +0000 UTC m=+0.154088237 container start d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.868844958 +0000 UTC m=+0.158209618 container attach d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:48:11 compute-0 kind_hertz[281500]: 167 167
Oct 01 13:48:11 compute-0 systemd[1]: libpod-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope: Deactivated successfully.
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.873584149 +0000 UTC m=+0.162948789 container died d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 13:48:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3aa24f3b760d9da1e44bec6559f8f198bd2df160d24aa81fb0845d35dbe3aaa-merged.mount: Deactivated successfully.
Oct 01 13:48:11 compute-0 podman[281484]: 2025-10-01 13:48:11.929695122 +0000 UTC m=+0.219059742 container remove d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hertz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:48:11 compute-0 systemd[1]: libpod-conmon-d222ff1ee8543e4742041b44447c87de7e857296b3993a533452bac7addf311f.scope: Deactivated successfully.
Oct 01 13:48:12 compute-0 ceph-mon[74802]: pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:12 compute-0 podman[281524]: 2025-10-01 13:48:12.16885102 +0000 UTC m=+0.048136470 container create e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:48:12 compute-0 systemd[1]: Started libpod-conmon-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope.
Oct 01 13:48:12 compute-0 podman[281524]: 2025-10-01 13:48:12.147377798 +0000 UTC m=+0.026663228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:48:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:48:12 compute-0 podman[281524]: 2025-10-01 13:48:12.270766879 +0000 UTC m=+0.150052369 container init e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:48:12 compute-0 podman[281524]: 2025-10-01 13:48:12.283009058 +0000 UTC m=+0.162294458 container start e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:48:12 compute-0 podman[281524]: 2025-10-01 13:48:12.287960715 +0000 UTC m=+0.167246165 container attach e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:48:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.316 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:48:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:48:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:48:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:13 compute-0 recursing_swanson[281541]: {
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_id": 0,
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "type": "bluestore"
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     },
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_id": 2,
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "type": "bluestore"
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     },
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_id": 1,
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:         "type": "bluestore"
Oct 01 13:48:13 compute-0 recursing_swanson[281541]:     }
Oct 01 13:48:13 compute-0 recursing_swanson[281541]: }
Oct 01 13:48:13 compute-0 systemd[1]: libpod-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Deactivated successfully.
Oct 01 13:48:13 compute-0 podman[281524]: 2025-10-01 13:48:13.413814919 +0000 UTC m=+1.293100339 container died e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:48:13 compute-0 systemd[1]: libpod-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Consumed 1.134s CPU time.
Oct 01 13:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb4b5d29ada6c7e6f7cb665ce697a3f1a25e7333e36858b04144a3ced7a983e9-merged.mount: Deactivated successfully.
Oct 01 13:48:13 compute-0 podman[281524]: 2025-10-01 13:48:13.48401165 +0000 UTC m=+1.363297070 container remove e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:48:13 compute-0 systemd[1]: libpod-conmon-e8f7f3c2626baa88921047cc968c75bd9dd91ca4f6c7fbb0109944967d169b29.scope: Deactivated successfully.
Oct 01 13:48:13 compute-0 sudo[281421]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:48:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:48:13 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:13 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e55e401f-fd6c-4af4-acd6-5f6d6d2cae2d does not exist
Oct 01 13:48:13 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c4735f57-6298-47a9-878f-1eacf6fe1d8c does not exist
Oct 01 13:48:13 compute-0 sudo[281588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:48:13 compute-0 sudo[281588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:13 compute-0 sudo[281588]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:13 compute-0 sudo[281613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:48:13 compute-0 sudo[281613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:48:13 compute-0 sudo[281613]: pam_unix(sudo:session): session closed for user root
Oct 01 13:48:14 compute-0 ceph-mon[74802]: pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:14 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:48:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:15 compute-0 nova_compute[260022]: 2025-10-01 13:48:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:15 compute-0 nova_compute[260022]: 2025-10-01 13:48:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:16 compute-0 ceph-mon[74802]: pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:18 compute-0 ceph-mon[74802]: pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:18 compute-0 nova_compute[260022]: 2025-10-01 13:48:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:20 compute-0 ceph-mon[74802]: pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:22 compute-0 ceph-mon[74802]: pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:22 compute-0 nova_compute[260022]: 2025-10-01 13:48:22.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:48:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:24 compute-0 ceph-mon[74802]: pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:26 compute-0 ceph-mon[74802]: pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:28 compute-0 ceph-mon[74802]: pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:29 compute-0 podman[281640]: 2025-10-01 13:48:29.557108608 +0000 UTC m=+0.089960409 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:48:29 compute-0 podman[281639]: 2025-10-01 13:48:29.558999408 +0000 UTC m=+0.098591154 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:48:29 compute-0 podman[281641]: 2025-10-01 13:48:29.589791967 +0000 UTC m=+0.116677219 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:48:29 compute-0 podman[281638]: 2025-10-01 13:48:29.613003444 +0000 UTC m=+0.154651545 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 13:48:30 compute-0 ceph-mon[74802]: pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:32 compute-0 ceph-mon[74802]: pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:32.887 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:48:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:32.888 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:48:34 compute-0 ceph-mon[74802]: pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:36 compute-0 ceph-mon[74802]: pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:38 compute-0 ceph-mon[74802]: pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:39 compute-0 ceph-mon[74802]: pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:41 compute-0 ceph-mon[74802]: pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:42 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:48:42.890 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:48:43 compute-0 ceph-mon[74802]: pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:45 compute-0 ceph-mon[74802]: pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:48:47
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct 01 13:48:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:48:47 compute-0 ceph-mon[74802]: pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:48:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:49 compute-0 ceph-mon[74802]: pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:51 compute-0 ceph-mon[74802]: pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:54 compute-0 ceph-mon[74802]: pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:48:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:48:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:48:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:48:56 compute-0 ceph-mon[74802]: pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:48:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3922701101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:48:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:48:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:48:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:48:58 compute-0 ceph-mon[74802]: pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:48:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:00 compute-0 ceph-mon[74802]: pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:00 compute-0 podman[281731]: 2025-10-01 13:49:00.533557125 +0000 UTC m=+0.059201672 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:49:00 compute-0 podman[281719]: 2025-10-01 13:49:00.539654059 +0000 UTC m=+0.085135967 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:49:00 compute-0 podman[281725]: 2025-10-01 13:49:00.540225006 +0000 UTC m=+0.075091487 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:49:00 compute-0 podman[281718]: 2025-10-01 13:49:00.591587459 +0000 UTC m=+0.143565734 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct 01 13:49:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:02 compute-0 ceph-mon[74802]: pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:04 compute-0 ceph-mon[74802]: pgmap v1358: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:04 compute-0 nova_compute[260022]: 2025-10-01 13:49:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:06 compute-0 ceph-mon[74802]: pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.378 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.379 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:49:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:49:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231492092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:49:06 compute-0 nova_compute[260022]: 2025-10-01 13:49:06.832 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:49:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.023 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.024 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:49:07 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2231492092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.200 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.201 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.201 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.253 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:49:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:49:07 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426256805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.740 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.748 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.829 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.831 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:49:07 compute-0 nova_compute[260022]: 2025-10-01 13:49:07.831 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:49:08 compute-0 ceph-mon[74802]: pgmap v1360: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:08 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2426256805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:49:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:10 compute-0 ceph-mon[74802]: pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:10 compute-0 nova_compute[260022]: 2025-10-01 13:49:10.827 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:10 compute-0 nova_compute[260022]: 2025-10-01 13:49:10.827 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:10 compute-0 nova_compute[260022]: 2025-10-01 13:49:10.828 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:49:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:12 compute-0 ceph-mon[74802]: pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:49:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:49:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:12.317 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:49:12 compute-0 nova_compute[260022]: 2025-10-01 13:49:12.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:12 compute-0 nova_compute[260022]: 2025-10-01 13:49:12.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:49:12 compute-0 nova_compute[260022]: 2025-10-01 13:49:12.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:49:12 compute-0 nova_compute[260022]: 2025-10-01 13:49:12.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:49:12 compute-0 nova_compute[260022]: 2025-10-01 13:49:12.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:13 compute-0 sudo[281837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:13 compute-0 sudo[281837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:13 compute-0 sudo[281837]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:13 compute-0 sudo[281862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:49:13 compute-0 sudo[281862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:13 compute-0 sudo[281862]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:13 compute-0 sudo[281887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:13 compute-0 sudo[281887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:13 compute-0 sudo[281887]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:14 compute-0 sudo[281912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:49:14 compute-0 sudo[281912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:14 compute-0 ceph-mon[74802]: pgmap v1363: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:14 compute-0 sudo[281912]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d41a8666-32d5-4677-ad76-7c691637dd21 does not exist
Oct 01 13:49:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 66a27645-a620-49f2-a271-05994af9096e does not exist
Oct 01 13:49:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ae33c732-8835-4d24-8ce0-b60b40facfba does not exist
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:49:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:49:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:49:14 compute-0 sudo[281970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:14 compute-0 sudo[281970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:14 compute-0 sudo[281970]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:14 compute-0 sudo[281995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:49:14 compute-0 sudo[281995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:14 compute-0 sudo[281995]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:14 compute-0 sudo[282020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:14 compute-0 sudo[282020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:14 compute-0 sudo[282020]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:15 compute-0 sudo[282045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:49:15 compute-0 sudo[282045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:49:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:49:15 compute-0 nova_compute[260022]: 2025-10-01 13:49:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.418336657 +0000 UTC m=+0.023366724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.546320704 +0000 UTC m=+0.151350751 container create 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:49:15 compute-0 systemd[1]: Started libpod-conmon-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope.
Oct 01 13:49:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.775079192 +0000 UTC m=+0.380109249 container init 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.790573835 +0000 UTC m=+0.395603892 container start 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:49:15 compute-0 bold_bose[282126]: 167 167
Oct 01 13:49:15 compute-0 systemd[1]: libpod-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope: Deactivated successfully.
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.911352943 +0000 UTC m=+0.516383050 container attach 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:49:15 compute-0 podman[282110]: 2025-10-01 13:49:15.913687677 +0000 UTC m=+0.518717734 container died 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:49:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-04bb906769aaeb94c5ff42988d1286618a846bceb1e627db4a2551ee38aa59d9-merged.mount: Deactivated successfully.
Oct 01 13:49:16 compute-0 ceph-mon[74802]: pgmap v1364: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:16 compute-0 podman[282110]: 2025-10-01 13:49:16.330112339 +0000 UTC m=+0.935142376 container remove 1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bose, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:49:16 compute-0 nova_compute[260022]: 2025-10-01 13:49:16.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:16 compute-0 systemd[1]: libpod-conmon-1d88e586997392dbde2019ca1445246d4cf73ff93fd566db60f62e4aca9ce873.scope: Deactivated successfully.
Oct 01 13:49:16 compute-0 podman[282149]: 2025-10-01 13:49:16.561842922 +0000 UTC m=+0.031911505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:16 compute-0 podman[282149]: 2025-10-01 13:49:16.684481389 +0000 UTC m=+0.154549922 container create 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:49:16 compute-0 systemd[1]: Started libpod-conmon-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope.
Oct 01 13:49:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:16 compute-0 podman[282149]: 2025-10-01 13:49:16.946899727 +0000 UTC m=+0.416968280 container init 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:49:16 compute-0 podman[282149]: 2025-10-01 13:49:16.954684855 +0000 UTC m=+0.424753358 container start 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:49:17 compute-0 podman[282149]: 2025-10-01 13:49:17.079024276 +0000 UTC m=+0.549092779 container attach 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:49:17 compute-0 ceph-mon[74802]: pgmap v1365: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:18 compute-0 jolly_beaver[282166]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:49:18 compute-0 jolly_beaver[282166]: --> relative data size: 1.0
Oct 01 13:49:18 compute-0 jolly_beaver[282166]: --> All data devices are unavailable
Oct 01 13:49:18 compute-0 systemd[1]: libpod-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Deactivated successfully.
Oct 01 13:49:18 compute-0 systemd[1]: libpod-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Consumed 1.159s CPU time.
Oct 01 13:49:18 compute-0 podman[282149]: 2025-10-01 13:49:18.161477951 +0000 UTC m=+1.631546554 container died 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0f15632898112061c7e6a7fff253c152def49e5b521b0d4fb21a1f6c71a827-merged.mount: Deactivated successfully.
Oct 01 13:49:18 compute-0 podman[282149]: 2025-10-01 13:49:18.23983532 +0000 UTC m=+1.709903843 container remove 6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:49:18 compute-0 systemd[1]: libpod-conmon-6ad6b693025646469263f1dc687fab979f5e5f2d1e3662bde1d34914944985d8.scope: Deactivated successfully.
Oct 01 13:49:18 compute-0 sudo[282045]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:18 compute-0 nova_compute[260022]: 2025-10-01 13:49:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:49:18 compute-0 sudo[282207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:18 compute-0 sudo[282207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:18 compute-0 sudo[282207]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:18 compute-0 sudo[282232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:49:18 compute-0 sudo[282232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:18 compute-0 sudo[282232]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:18 compute-0 sudo[282257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:18 compute-0 sudo[282257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:18 compute-0 sudo[282257]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:18 compute-0 sudo[282282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:49:18 compute-0 sudo[282282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.106834269 +0000 UTC m=+0.075329915 container create 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:49:19 compute-0 systemd[1]: Started libpod-conmon-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope.
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.07977945 +0000 UTC m=+0.048275206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.211109272 +0000 UTC m=+0.179604938 container init 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.223450045 +0000 UTC m=+0.191945691 container start 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.227590796 +0000 UTC m=+0.196086462 container attach 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:49:19 compute-0 compassionate_lumiere[282365]: 167 167
Oct 01 13:49:19 compute-0 systemd[1]: libpod-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope: Deactivated successfully.
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.231657236 +0000 UTC m=+0.200152882 container died 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:49:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-85718ed1a2e80edc2795dfa59dc557f36fc3c38c152fdc5544d6f051fb91e22b-merged.mount: Deactivated successfully.
Oct 01 13:49:19 compute-0 podman[282348]: 2025-10-01 13:49:19.277982037 +0000 UTC m=+0.246477683 container remove 96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:49:19 compute-0 systemd[1]: libpod-conmon-96974b0884675a50192a0881fe0523e2a13854ccf9d69fd00a0484b170bceafd.scope: Deactivated successfully.
Oct 01 13:49:19 compute-0 podman[282388]: 2025-10-01 13:49:19.518686435 +0000 UTC m=+0.056152115 container create 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:49:19 compute-0 systemd[1]: Started libpod-conmon-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope.
Oct 01 13:49:19 compute-0 podman[282388]: 2025-10-01 13:49:19.487897227 +0000 UTC m=+0.025362987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:19 compute-0 podman[282388]: 2025-10-01 13:49:19.627622107 +0000 UTC m=+0.165087827 container init 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:49:19 compute-0 podman[282388]: 2025-10-01 13:49:19.637291335 +0000 UTC m=+0.174757015 container start 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:49:19 compute-0 podman[282388]: 2025-10-01 13:49:19.640845578 +0000 UTC m=+0.178311288 container attach 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:49:19 compute-0 ceph-mon[74802]: pgmap v1366: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:20 compute-0 fervent_ride[282404]: {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     "0": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "devices": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "/dev/loop3"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             ],
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_name": "ceph_lv0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_size": "21470642176",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "name": "ceph_lv0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "tags": {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_name": "ceph",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.crush_device_class": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.encrypted": "0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_id": "0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.vdo": "0"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             },
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "vg_name": "ceph_vg0"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         }
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     ],
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     "1": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "devices": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "/dev/loop4"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             ],
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_name": "ceph_lv1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_size": "21470642176",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "name": "ceph_lv1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "tags": {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_name": "ceph",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.crush_device_class": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.encrypted": "0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_id": "1",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.vdo": "0"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             },
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "vg_name": "ceph_vg1"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         }
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     ],
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     "2": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "devices": [
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "/dev/loop5"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             ],
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_name": "ceph_lv2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_size": "21470642176",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "name": "ceph_lv2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "tags": {
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.cluster_name": "ceph",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.crush_device_class": "",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.encrypted": "0",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osd_id": "2",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:                 "ceph.vdo": "0"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             },
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "type": "block",
Oct 01 13:49:20 compute-0 fervent_ride[282404]:             "vg_name": "ceph_vg2"
Oct 01 13:49:20 compute-0 fervent_ride[282404]:         }
Oct 01 13:49:20 compute-0 fervent_ride[282404]:     ]
Oct 01 13:49:20 compute-0 fervent_ride[282404]: }
Oct 01 13:49:20 compute-0 systemd[1]: libpod-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope: Deactivated successfully.
Oct 01 13:49:20 compute-0 podman[282413]: 2025-10-01 13:49:20.489401691 +0000 UTC m=+0.033275920 container died 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-13f3e37eba6bce987fad764923a9cd8f36c52613efff872ace8972a1244bbe1e-merged.mount: Deactivated successfully.
Oct 01 13:49:20 compute-0 podman[282413]: 2025-10-01 13:49:20.566477889 +0000 UTC m=+0.110352088 container remove 44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 13:49:20 compute-0 systemd[1]: libpod-conmon-44b61346083039c0bec7d85aea0c9fb04d8b3ab84247dd324e8d084d5342650c.scope: Deactivated successfully.
Oct 01 13:49:20 compute-0 sudo[282282]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:20 compute-0 sudo[282428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:20 compute-0 sudo[282428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:20 compute-0 sudo[282428]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:20 compute-0 sudo[282453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:49:20 compute-0 sudo[282453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:20 compute-0 sudo[282453]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:20 compute-0 sudo[282478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:20 compute-0 sudo[282478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:20 compute-0 sudo[282478]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:20 compute-0 sudo[282503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:49:20 compute-0 sudo[282503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.349014904 +0000 UTC m=+0.056767424 container create 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:49:21 compute-0 systemd[1]: Started libpod-conmon-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope.
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.322110349 +0000 UTC m=+0.029862869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.549405602 +0000 UTC m=+0.257158162 container init 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.562332102 +0000 UTC m=+0.270084612 container start 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:49:21 compute-0 crazy_meitner[282584]: 167 167
Oct 01 13:49:21 compute-0 systemd[1]: libpod-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope: Deactivated successfully.
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.589874127 +0000 UTC m=+0.297626697 container attach 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.590784716 +0000 UTC m=+0.298537236 container died 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b82b89fc0274aea64c00217c93d69af7cc451d5eed8d204932129a3ea95aa3e8-merged.mount: Deactivated successfully.
Oct 01 13:49:21 compute-0 podman[282568]: 2025-10-01 13:49:21.752180004 +0000 UTC m=+0.459932524 container remove 4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:49:21 compute-0 systemd[1]: libpod-conmon-4f8875d804f08f1836f8bf866157c7f41c25c8f52deb91cef20d48926e73f115.scope: Deactivated successfully.
Oct 01 13:49:21 compute-0 ceph-mon[74802]: pgmap v1367: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:21 compute-0 podman[282608]: 2025-10-01 13:49:21.97705277 +0000 UTC m=+0.057359473 container create 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:49:22 compute-0 systemd[1]: Started libpod-conmon-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope.
Oct 01 13:49:22 compute-0 podman[282608]: 2025-10-01 13:49:21.958029956 +0000 UTC m=+0.038336649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:49:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:49:22 compute-0 podman[282608]: 2025-10-01 13:49:22.083820833 +0000 UTC m=+0.164127516 container init 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:49:22 compute-0 podman[282608]: 2025-10-01 13:49:22.092641823 +0000 UTC m=+0.172948546 container start 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:49:22 compute-0 podman[282608]: 2025-10-01 13:49:22.097251109 +0000 UTC m=+0.177557832 container attach 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:49:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:23 compute-0 confident_colden[282624]: {
Oct 01 13:49:23 compute-0 confident_colden[282624]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_id": 0,
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "type": "bluestore"
Oct 01 13:49:23 compute-0 confident_colden[282624]:     },
Oct 01 13:49:23 compute-0 confident_colden[282624]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_id": 2,
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "type": "bluestore"
Oct 01 13:49:23 compute-0 confident_colden[282624]:     },
Oct 01 13:49:23 compute-0 confident_colden[282624]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_id": 1,
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:49:23 compute-0 confident_colden[282624]:         "type": "bluestore"
Oct 01 13:49:23 compute-0 confident_colden[282624]:     }
Oct 01 13:49:23 compute-0 confident_colden[282624]: }
Oct 01 13:49:23 compute-0 systemd[1]: libpod-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Deactivated successfully.
Oct 01 13:49:23 compute-0 podman[282608]: 2025-10-01 13:49:23.257430584 +0000 UTC m=+1.337737307 container died 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:49:23 compute-0 systemd[1]: libpod-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Consumed 1.172s CPU time.
Oct 01 13:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4381f337f8b95a770d28cbbb04d948e90ecb92d9ccd7fc8dd59ff91288ef41b8-merged.mount: Deactivated successfully.
Oct 01 13:49:23 compute-0 podman[282608]: 2025-10-01 13:49:23.336158936 +0000 UTC m=+1.416465629 container remove 25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:49:23 compute-0 systemd[1]: libpod-conmon-25f4f1a9a0b04c99f0ea50f7782f7b8305ae1d28180c6085b8594a2112413e25.scope: Deactivated successfully.
Oct 01 13:49:23 compute-0 sudo[282503]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:49:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:49:23 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:23 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ef1ddb8-05a5-462b-bdd6-669bdba31fbc does not exist
Oct 01 13:49:23 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev dc187cc4-47ac-408e-be61-2ebc9f9906eb does not exist
Oct 01 13:49:23 compute-0 sudo[282670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:49:23 compute-0 sudo[282670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:23 compute-0 sudo[282670]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:23 compute-0 sudo[282695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:49:23 compute-0 sudo[282695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:49:23 compute-0 sudo[282695]: pam_unix(sudo:session): session closed for user root
Oct 01 13:49:23 compute-0 ceph-mon[74802]: pgmap v1368: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:23 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:49:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:25 compute-0 ceph-mon[74802]: pgmap v1369: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:27 compute-0 ceph-mon[74802]: pgmap v1370: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:29 compute-0 ceph-mon[74802]: pgmap v1371: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:31 compute-0 podman[282721]: 2025-10-01 13:49:31.541605253 +0000 UTC m=+0.094632507 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:49:31 compute-0 podman[282723]: 2025-10-01 13:49:31.545492506 +0000 UTC m=+0.078545927 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:49:31 compute-0 podman[282722]: 2025-10-01 13:49:31.554138541 +0000 UTC m=+0.102232239 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 13:49:31 compute-0 podman[282720]: 2025-10-01 13:49:31.582688768 +0000 UTC m=+0.137118768 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:49:32 compute-0 ceph-mon[74802]: pgmap v1372: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.103411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572103525, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2096, "num_deletes": 254, "total_data_size": 3460344, "memory_usage": 3530416, "flush_reason": "Manual Compaction"}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572128256, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3392926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25731, "largest_seqno": 27826, "table_properties": {"data_size": 3383248, "index_size": 6172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19400, "raw_average_key_size": 20, "raw_value_size": 3364019, "raw_average_value_size": 3526, "num_data_blocks": 273, "num_entries": 954, "num_filter_entries": 954, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326350, "oldest_key_time": 1759326350, "file_creation_time": 1759326572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24911 microseconds, and 14358 cpu microseconds.
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.128336) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3392926 bytes OK
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.128369) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130419) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130447) EVENT_LOG_v1 {"time_micros": 1759326572130436, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.130479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3451538, prev total WAL file size 3451538, number of live WAL files 2.
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.132556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3313KB)], [59(7576KB)]
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572132655, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11151033, "oldest_snapshot_seqno": -1}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5130 keys, 9375956 bytes, temperature: kUnknown
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572194354, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9375956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9339072, "index_size": 22950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 127375, "raw_average_key_size": 24, "raw_value_size": 9243830, "raw_average_value_size": 1801, "num_data_blocks": 948, "num_entries": 5130, "num_filter_entries": 5130, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.194761) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9375956 bytes
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.196685) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.4 rd, 151.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5652, records dropped: 522 output_compression: NoCompression
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.196717) EVENT_LOG_v1 {"time_micros": 1759326572196700, "job": 32, "event": "compaction_finished", "compaction_time_micros": 61802, "compaction_time_cpu_micros": 26835, "output_level": 6, "num_output_files": 1, "total_output_size": 9375956, "num_input_records": 5652, "num_output_records": 5130, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572198019, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326572201142, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.132395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:49:32.201303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:49:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:34 compute-0 ceph-mon[74802]: pgmap v1373: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:34.410 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:49:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:34.412 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:49:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:36 compute-0 ceph-mon[74802]: pgmap v1374: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:37 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:49:37.413 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:49:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:38 compute-0 ceph-mon[74802]: pgmap v1375: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:40 compute-0 ceph-mon[74802]: pgmap v1376: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:42 compute-0 ceph-mon[74802]: pgmap v1377: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:44 compute-0 ceph-mon[74802]: pgmap v1378: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:45 compute-0 ceph-mon[74802]: pgmap v1379: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:49:47
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Oct 01 13:49:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:49:47 compute-0 ceph-mon[74802]: pgmap v1380: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:49:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:50 compute-0 ceph-mon[74802]: pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:52 compute-0 ceph-mon[74802]: pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:54 compute-0 ceph-mon[74802]: pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:49:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:49:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:49:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:49:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:49:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3171431646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:49:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:49:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6754 writes, 26K keys, 6754 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6754 writes, 1414 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 707 writes, 1753 keys, 707 commit groups, 1.0 writes per commit group, ingest: 0.96 MB, 0.00 MB/s
                                           Interval WAL: 707 writes, 319 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:49:56 compute-0 ceph-mon[74802]: pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:49:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:49:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:49:58 compute-0 ceph-mon[74802]: pgmap v1385: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:49:59 compute-0 ceph-mon[74802]: pgmap v1386: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:50:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 7951 writes, 30K keys, 7951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7951 writes, 1749 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 740 writes, 1899 keys, 740 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s
                                           Interval WAL: 740 writes, 319 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:50:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:01 compute-0 ceph-mon[74802]: pgmap v1387: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:02 compute-0 podman[282802]: 2025-10-01 13:50:02.549160088 +0000 UTC m=+0.078241237 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 01 13:50:02 compute-0 podman[282801]: 2025-10-01 13:50:02.549645514 +0000 UTC m=+0.079119705 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:50:02 compute-0 podman[282800]: 2025-10-01 13:50:02.559071583 +0000 UTC m=+0.091797998 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:50:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:02 compute-0 podman[282799]: 2025-10-01 13:50:02.571514029 +0000 UTC m=+0.116720750 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct 01 13:50:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:03 compute-0 ceph-mon[74802]: pgmap v1388: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:04 compute-0 nova_compute[260022]: 2025-10-01 13:50:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:50:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6875 writes, 27K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6875 writes, 1441 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 826 writes, 1986 keys, 826 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s
                                           Interval WAL: 826 writes, 369 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:50:05 compute-0 ceph-mon[74802]: pgmap v1389: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 13:50:08 compute-0 ceph-mon[74802]: pgmap v1390: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:50:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:50:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3838049116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.780 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:50:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.951 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:50:08 compute-0 nova_compute[260022]: 2025-10-01 13:50:08.953 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:50:09 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3838049116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.059 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.060 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.060 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.184 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:50:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:50:09 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808567131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.623 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.632 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.649 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.652 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.652 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:50:09 compute-0 nova_compute[260022]: 2025-10-01 13:50:09.653 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:10 compute-0 ceph-mon[74802]: pgmap v1391: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:10 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1808567131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:50:10 compute-0 nova_compute[260022]: 2025-10-01 13:50:10.356 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:10 compute-0 nova_compute[260022]: 2025-10-01 13:50:10.356 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 13:50:10 compute-0 nova_compute[260022]: 2025-10-01 13:50:10.383 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 13:50:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:11 compute-0 nova_compute[260022]: 2025-10-01 13:50:11.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:11 compute-0 nova_compute[260022]: 2025-10-01 13:50:11.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:11 compute-0 nova_compute[260022]: 2025-10-01 13:50:11.368 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:50:12 compute-0 ceph-mon[74802]: pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:50:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:50:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:50:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:13 compute-0 nova_compute[260022]: 2025-10-01 13:50:13.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:14 compute-0 ceph-mon[74802]: pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.371 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.372 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:14 compute-0 nova_compute[260022]: 2025-10-01 13:50:14.372 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 13:50:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:15 compute-0 nova_compute[260022]: 2025-10-01 13:50:15.357 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:16 compute-0 ceph-mon[74802]: pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:18 compute-0 ceph-mon[74802]: pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:18 compute-0 nova_compute[260022]: 2025-10-01 13:50:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:18 compute-0 nova_compute[260022]: 2025-10-01 13:50:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:20 compute-0 ceph-mon[74802]: pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:22 compute-0 ceph-mon[74802]: pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:23 compute-0 nova_compute[260022]: 2025-10-01 13:50:23.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:50:23 compute-0 sudo[282928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:23 compute-0 sudo[282928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:23 compute-0 sudo[282928]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:23 compute-0 sudo[282953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:50:23 compute-0 sudo[282953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:23 compute-0 sudo[282953]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:23 compute-0 sudo[282978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:23 compute-0 sudo[282978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:23 compute-0 sudo[282978]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:23 compute-0 sudo[283003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:50:23 compute-0 sudo[283003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:24 compute-0 ceph-mon[74802]: pgmap v1398: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:24 compute-0 sudo[283003]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f162a899-d0ee-470b-b465-1e41c6120560 does not exist
Oct 01 13:50:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 258f11c3-8488-4a8c-bdf0-8a921576cdc6 does not exist
Oct 01 13:50:24 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8552a23d-82c5-4160-b083-4f87e3e61b9b does not exist
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:50:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:50:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:50:24 compute-0 sudo[283059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:24 compute-0 sudo[283059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:24 compute-0 sudo[283059]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:24 compute-0 sudo[283084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:50:24 compute-0 sudo[283084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:24 compute-0 sudo[283084]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:24 compute-0 sudo[283109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:24 compute-0 sudo[283109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:24 compute-0 sudo[283109]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:24 compute-0 sudo[283134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:50:24 compute-0 sudo[283134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:50:25 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.285621558 +0000 UTC m=+0.056945420 container create c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:50:25 compute-0 systemd[1]: Started libpod-conmon-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope.
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.261027127 +0000 UTC m=+0.032350999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.410187387 +0000 UTC m=+0.181511329 container init c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.424125509 +0000 UTC m=+0.195449361 container start c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.428656973 +0000 UTC m=+0.199980915 container attach c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:50:25 compute-0 adoring_haibt[283215]: 167 167
Oct 01 13:50:25 compute-0 systemd[1]: libpod-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope: Deactivated successfully.
Oct 01 13:50:25 compute-0 conmon[283215]: conmon c18ebfa3a8970ee07b86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope/container/memory.events
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.434456347 +0000 UTC m=+0.205780229 container died c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-003656663cfda4701cb3d3accb726dc5d7ce18a363fe149aa05aa0701aa697c9-merged.mount: Deactivated successfully.
Oct 01 13:50:25 compute-0 podman[283199]: 2025-10-01 13:50:25.49308826 +0000 UTC m=+0.264412112 container remove c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:50:25 compute-0 systemd[1]: libpod-conmon-c18ebfa3a8970ee07b862a66146bff9d6cf338b1cb6574cb0a1da207e8e3c822.scope: Deactivated successfully.
Oct 01 13:50:25 compute-0 podman[283241]: 2025-10-01 13:50:25.707586777 +0000 UTC m=+0.061732413 container create 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:50:25 compute-0 systemd[1]: Started libpod-conmon-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope.
Oct 01 13:50:25 compute-0 podman[283241]: 2025-10-01 13:50:25.67433617 +0000 UTC m=+0.028481786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:25 compute-0 podman[283241]: 2025-10-01 13:50:25.810208857 +0000 UTC m=+0.164354543 container init 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:50:25 compute-0 podman[283241]: 2025-10-01 13:50:25.821401593 +0000 UTC m=+0.175547229 container start 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:50:25 compute-0 podman[283241]: 2025-10-01 13:50:25.825536405 +0000 UTC m=+0.179682011 container attach 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:50:26 compute-0 ceph-mon[74802]: pgmap v1399: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:27 compute-0 interesting_antonelli[283257]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:50:27 compute-0 interesting_antonelli[283257]: --> relative data size: 1.0
Oct 01 13:50:27 compute-0 interesting_antonelli[283257]: --> All data devices are unavailable
Oct 01 13:50:27 compute-0 systemd[1]: libpod-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Deactivated successfully.
Oct 01 13:50:27 compute-0 podman[283241]: 2025-10-01 13:50:27.134095083 +0000 UTC m=+1.488240689 container died 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:50:27 compute-0 systemd[1]: libpod-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Consumed 1.261s CPU time.
Oct 01 13:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dd9aa17fabe6f3d793f6cee4a9f68b04676ba393ea1678dc5f651aea487e899-merged.mount: Deactivated successfully.
Oct 01 13:50:27 compute-0 podman[283241]: 2025-10-01 13:50:27.199058248 +0000 UTC m=+1.553203844 container remove 69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_antonelli, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:50:27 compute-0 systemd[1]: libpod-conmon-69a08724595b3bd6d0aa17d2ea96208d3d4c1aabe91717d42eff63c5441cc9a5.scope: Deactivated successfully.
Oct 01 13:50:27 compute-0 sudo[283134]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:27 compute-0 sudo[283300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:27 compute-0 sudo[283300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:27 compute-0 sudo[283300]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:27 compute-0 sudo[283325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:50:27 compute-0 sudo[283325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:27 compute-0 sudo[283325]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:27 compute-0 sudo[283350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:27 compute-0 sudo[283350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:27 compute-0 sudo[283350]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:27 compute-0 sudo[283375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:50:27 compute-0 sudo[283375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.067990488 +0000 UTC m=+0.075468999 container create f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct 01 13:50:28 compute-0 systemd[1]: Started libpod-conmon-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope.
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.036516938 +0000 UTC m=+0.043995519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:28 compute-0 ceph-mon[74802]: pgmap v1400: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.179567034 +0000 UTC m=+0.187045605 container init f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.19238456 +0000 UTC m=+0.199863051 container start f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.197288416 +0000 UTC m=+0.204766987 container attach f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:50:28 compute-0 sleepy_ritchie[283456]: 167 167
Oct 01 13:50:28 compute-0 systemd[1]: libpod-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope: Deactivated successfully.
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.19958894 +0000 UTC m=+0.207067451 container died f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a6f332a001e5d5268215c68572b358cf2b1153550ac5dbe0cb31bdf29b06a74-merged.mount: Deactivated successfully.
Oct 01 13:50:28 compute-0 podman[283440]: 2025-10-01 13:50:28.253586425 +0000 UTC m=+0.261064916 container remove f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:50:28 compute-0 systemd[1]: libpod-conmon-f0ad3eff0e888e542e028a7f07bdf7883a8722065700af20193338ab33b1cb3b.scope: Deactivated successfully.
Oct 01 13:50:28 compute-0 podman[283480]: 2025-10-01 13:50:28.453915381 +0000 UTC m=+0.060121692 container create 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:50:28 compute-0 systemd[1]: Started libpod-conmon-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope.
Oct 01 13:50:28 compute-0 podman[283480]: 2025-10-01 13:50:28.431183359 +0000 UTC m=+0.037389710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:28 compute-0 podman[283480]: 2025-10-01 13:50:28.578576122 +0000 UTC m=+0.184782473 container init 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:50:28 compute-0 podman[283480]: 2025-10-01 13:50:28.595671915 +0000 UTC m=+0.201878216 container start 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:50:28 compute-0 podman[283480]: 2025-10-01 13:50:28.599763885 +0000 UTC m=+0.205970196 container attach 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 13:50:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]: {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     "0": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "devices": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "/dev/loop3"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             ],
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_name": "ceph_lv0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_size": "21470642176",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "name": "ceph_lv0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "tags": {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_name": "ceph",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.crush_device_class": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.encrypted": "0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_id": "0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.vdo": "0"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             },
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "vg_name": "ceph_vg0"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         }
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     ],
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     "1": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "devices": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "/dev/loop4"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             ],
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_name": "ceph_lv1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_size": "21470642176",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "name": "ceph_lv1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "tags": {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_name": "ceph",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.crush_device_class": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.encrypted": "0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_id": "1",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.vdo": "0"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             },
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "vg_name": "ceph_vg1"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         }
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     ],
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     "2": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "devices": [
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "/dev/loop5"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             ],
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_name": "ceph_lv2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_size": "21470642176",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "name": "ceph_lv2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "tags": {
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.cluster_name": "ceph",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.crush_device_class": "",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.encrypted": "0",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osd_id": "2",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:                 "ceph.vdo": "0"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             },
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "type": "block",
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:             "vg_name": "ceph_vg2"
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:         }
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]:     ]
Oct 01 13:50:29 compute-0 dazzling_wilson[283496]: }
Oct 01 13:50:29 compute-0 systemd[1]: libpod-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope: Deactivated successfully.
Oct 01 13:50:29 compute-0 podman[283480]: 2025-10-01 13:50:29.430013866 +0000 UTC m=+1.036220147 container died 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a4445a15bfbb600f1e5c28788bb62696b4e814a12b3fda8352e3f6c650dfbd-merged.mount: Deactivated successfully.
Oct 01 13:50:29 compute-0 podman[283480]: 2025-10-01 13:50:29.503055917 +0000 UTC m=+1.109262228 container remove 35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:50:29 compute-0 systemd[1]: libpod-conmon-35b6bdd646ca9895898b60a837b6bcda3e0dfda9fcfdaadb355a92d43d076ddb.scope: Deactivated successfully.
Oct 01 13:50:29 compute-0 sudo[283375]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:29 compute-0 sudo[283518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:29 compute-0 sudo[283518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:29 compute-0 sudo[283518]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:29 compute-0 sudo[283543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:50:29 compute-0 sudo[283543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:29 compute-0 sudo[283543]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:29 compute-0 sudo[283568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:29 compute-0 sudo[283568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:29 compute-0 sudo[283568]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:29 compute-0 sudo[283593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:50:29 compute-0 sudo[283593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:30 compute-0 ceph-mon[74802]: pgmap v1401: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.391798537 +0000 UTC m=+0.046285912 container create eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:50:30 compute-0 systemd[1]: Started libpod-conmon-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope.
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.369842969 +0000 UTC m=+0.024330354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.486680792 +0000 UTC m=+0.141168237 container init eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.497592499 +0000 UTC m=+0.152079884 container start eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.501375988 +0000 UTC m=+0.155863433 container attach eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:50:30 compute-0 relaxed_wescoff[283674]: 167 167
Oct 01 13:50:30 compute-0 systemd[1]: libpod-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope: Deactivated successfully.
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.505064056 +0000 UTC m=+0.159551401 container died eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a652e50afd4554dae37605972719d1a159d635f93235325a50a6c305d6f3b26-merged.mount: Deactivated successfully.
Oct 01 13:50:30 compute-0 podman[283658]: 2025-10-01 13:50:30.545891634 +0000 UTC m=+0.200378979 container remove eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:50:30 compute-0 systemd[1]: libpod-conmon-eeb3d723f9ccec3a0ac10b7a3471c03568c05586fdd455dfacbf5b6a0a2ab968.scope: Deactivated successfully.
Oct 01 13:50:30 compute-0 podman[283698]: 2025-10-01 13:50:30.733454173 +0000 UTC m=+0.048250734 container create a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:50:30 compute-0 systemd[1]: Started libpod-conmon-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope.
Oct 01 13:50:30 compute-0 podman[283698]: 2025-10-01 13:50:30.711611899 +0000 UTC m=+0.026408450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:50:30 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:50:30 compute-0 podman[283698]: 2025-10-01 13:50:30.846250037 +0000 UTC m=+0.161046598 container init a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:50:30 compute-0 podman[283698]: 2025-10-01 13:50:30.860959875 +0000 UTC m=+0.175756406 container start a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 13:50:30 compute-0 podman[283698]: 2025-10-01 13:50:30.864238089 +0000 UTC m=+0.179034620 container attach a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:50:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]: {
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_id": 0,
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "type": "bluestore"
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     },
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_id": 2,
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "type": "bluestore"
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     },
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_id": 1,
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:         "type": "bluestore"
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]:     }
Oct 01 13:50:32 compute-0 xenodochial_mahavira[283714]: }
Oct 01 13:50:32 compute-0 systemd[1]: libpod-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Deactivated successfully.
Oct 01 13:50:32 compute-0 podman[283698]: 2025-10-01 13:50:32.026842601 +0000 UTC m=+1.341639152 container died a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:50:32 compute-0 systemd[1]: libpod-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Consumed 1.176s CPU time.
Oct 01 13:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0f87bff77e94c62c43757feccdf3ee1e80c1cc65c74eea38b356831f3acccc-merged.mount: Deactivated successfully.
Oct 01 13:50:32 compute-0 podman[283698]: 2025-10-01 13:50:32.105985145 +0000 UTC m=+1.420781706 container remove a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 13:50:32 compute-0 systemd[1]: libpod-conmon-a31737cab3085998f9f6eda470a60f13abbb6be8573bb80f83efb90b45ab838c.scope: Deactivated successfully.
Oct 01 13:50:32 compute-0 sudo[283593]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:50:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:50:32 compute-0 ceph-mon[74802]: pgmap v1402: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:32 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev dcbdc4dd-a9fd-4d68-a026-0cbb44f02b4b does not exist
Oct 01 13:50:32 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b61d6677-ebc6-406d-acee-7256aee51707 does not exist
Oct 01 13:50:32 compute-0 sudo[283762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:50:32 compute-0 sudo[283762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:32 compute-0 sudo[283762]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:32 compute-0 sudo[283787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:50:32 compute-0 sudo[283787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:50:32 compute-0 sudo[283787]: pam_unix(sudo:session): session closed for user root
Oct 01 13:50:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:33 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:50:33 compute-0 podman[283814]: 2025-10-01 13:50:33.522314718 +0000 UTC m=+0.074438306 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:50:33 compute-0 podman[283813]: 2025-10-01 13:50:33.522173873 +0000 UTC m=+0.074086704 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2)
Oct 01 13:50:33 compute-0 podman[283815]: 2025-10-01 13:50:33.537889713 +0000 UTC m=+0.083225916 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 13:50:33 compute-0 podman[283812]: 2025-10-01 13:50:33.619482496 +0000 UTC m=+0.172054498 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 13:50:34 compute-0 ceph-mon[74802]: pgmap v1403: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:36 compute-0 ceph-mon[74802]: pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:38 compute-0 ceph-mon[74802]: pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:39.839 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:50:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:39.841 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:50:40 compute-0 ceph-mon[74802]: pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:42 compute-0 ceph-mon[74802]: pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:44 compute-0 ceph-mon[74802]: pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:50:45.843 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:50:46 compute-0 ceph-mon[74802]: pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:50:47
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 01 13:50:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:50:48 compute-0 ceph-mon[74802]: pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:49 compute-0 ceph-mon[74802]: pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:51 compute-0 ceph-mon[74802]: pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:53 compute-0 sshd-session[282927]: error: kex_exchange_identification: read: Connection timed out
Oct 01 13:50:53 compute-0 sshd-session[282927]: banner exchange: Connection from 14.103.127.7 port 50732: Connection timed out
Oct 01 13:50:53 compute-0 ceph-mon[74802]: pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:50:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:50:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:50:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:50:55 compute-0 ceph-mon[74802]: pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:50:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1191837026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:50:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:50:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:50:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:50:57 compute-0 ceph-mon[74802]: pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:50:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:00 compute-0 ceph-mon[74802]: pgmap v1416: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:02 compute-0 ceph-mon[74802]: pgmap v1417: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:04 compute-0 ceph-mon[74802]: pgmap v1418: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:04 compute-0 nova_compute[260022]: 2025-10-01 13:51:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:04 compute-0 podman[283907]: 2025-10-01 13:51:04.536040256 +0000 UTC m=+0.061814605 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:51:04 compute-0 podman[283895]: 2025-10-01 13:51:04.53613763 +0000 UTC m=+0.082780431 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct 01 13:51:04 compute-0 podman[283894]: 2025-10-01 13:51:04.553180071 +0000 UTC m=+0.102924671 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 13:51:04 compute-0 podman[283896]: 2025-10-01 13:51:04.579825158 +0000 UTC m=+0.111453363 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:51:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:06 compute-0 ceph-mon[74802]: pgmap v1419: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:08 compute-0 ceph-mon[74802]: pgmap v1420: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.402 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.403 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.404 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:51:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:51:09 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685868125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:51:09 compute-0 nova_compute[260022]: 2025-10-01 13:51:09.833 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.013 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.015 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.015 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.016 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:51:10 compute-0 ceph-mon[74802]: pgmap v1421: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:51:10 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2685868125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.347 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.347 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.348 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.368 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.482 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.482 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.517 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.535 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 13:51:10 compute-0 nova_compute[260022]: 2025-10-01 13:51:10.565 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:51:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:51:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:51:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134761460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:51:11 compute-0 nova_compute[260022]: 2025-10-01 13:51:11.011 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:51:11 compute-0 nova_compute[260022]: 2025-10-01 13:51:11.016 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:51:11 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3134761460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:51:11 compute-0 nova_compute[260022]: 2025-10-01 13:51:11.129 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:51:11 compute-0 nova_compute[260022]: 2025-10-01 13:51:11.132 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:51:11 compute-0 nova_compute[260022]: 2025-10-01 13:51:11.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:51:12 compute-0 ceph-mon[74802]: pgmap v1422: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct 01 13:51:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.318 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:51:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:51:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:51:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.582631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672582674, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1024, "num_deletes": 250, "total_data_size": 1466643, "memory_usage": 1495640, "flush_reason": "Manual Compaction"}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672735325, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 874397, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27827, "largest_seqno": 28850, "table_properties": {"data_size": 870507, "index_size": 1542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10356, "raw_average_key_size": 20, "raw_value_size": 862091, "raw_average_value_size": 1717, "num_data_blocks": 70, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326573, "oldest_key_time": 1759326573, "file_creation_time": 1759326672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 152760 microseconds, and 3466 cpu microseconds.
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.735396) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 874397 bytes OK
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.735425) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748367) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748395) EVENT_LOG_v1 {"time_micros": 1759326672748386, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.748422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1461841, prev total WAL file size 1461841, number of live WAL files 2.
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.749615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(853KB)], [62(9156KB)]
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672749666, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10250353, "oldest_snapshot_seqno": -1}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5164 keys, 7529128 bytes, temperature: kUnknown
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672863985, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7529128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7495574, "index_size": 19556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 128218, "raw_average_key_size": 24, "raw_value_size": 7403193, "raw_average_value_size": 1433, "num_data_blocks": 810, "num_entries": 5164, "num_filter_entries": 5164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.864345) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7529128 bytes
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.865890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.5 rd, 65.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(20.3) write-amplify(8.6) OK, records in: 5632, records dropped: 468 output_compression: NoCompression
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.865913) EVENT_LOG_v1 {"time_micros": 1759326672865902, "job": 34, "event": "compaction_finished", "compaction_time_micros": 114473, "compaction_time_cpu_micros": 34474, "output_level": 6, "num_output_files": 1, "total_output_size": 7529128, "num_input_records": 5632, "num_output_records": 5164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672866232, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326672868467, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.749463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:51:12.868645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:51:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:13 compute-0 nova_compute[260022]: 2025-10-01 13:51:13.134 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:13 compute-0 nova_compute[260022]: 2025-10-01 13:51:13.135 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:51:13 compute-0 nova_compute[260022]: 2025-10-01 13:51:13.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:13 compute-0 ceph-mon[74802]: pgmap v1423: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:13 compute-0 nova_compute[260022]: 2025-10-01 13:51:13.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:14 compute-0 nova_compute[260022]: 2025-10-01 13:51:14.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:15 compute-0 nova_compute[260022]: 2025-10-01 13:51:15.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:15 compute-0 ceph-mon[74802]: pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:16 compute-0 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:16 compute-0 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:51:16 compute-0 nova_compute[260022]: 2025-10-01 13:51:16.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:51:16 compute-0 nova_compute[260022]: 2025-10-01 13:51:16.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:51:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:17 compute-0 ceph-mon[74802]: pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:20 compute-0 ceph-mon[74802]: pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 13:51:20 compute-0 nova_compute[260022]: 2025-10-01 13:51:20.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:20 compute-0 nova_compute[260022]: 2025-10-01 13:51:20.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:51:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:51:22 compute-0 ceph-mon[74802]: pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:51:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:51:24 compute-0 ceph-mon[74802]: pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct 01 13:51:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:26 compute-0 ceph-mon[74802]: pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:28 compute-0 ceph-mon[74802]: pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:30 compute-0 ceph-mon[74802]: pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:32 compute-0 ceph-mon[74802]: pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:32 compute-0 sudo[284015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:32 compute-0 sudo[284015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:32 compute-0 sudo[284015]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:32 compute-0 sudo[284040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:51:32 compute-0 sudo[284040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:32 compute-0 sudo[284040]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:32 compute-0 sudo[284065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:32 compute-0 sudo[284065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:32 compute-0 sudo[284065]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:32 compute-0 sudo[284090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:51:32 compute-0 sudo[284090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:33 compute-0 sudo[284090]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 00ed4a34-4ead-4fef-9705-6c6c18ee4b13 does not exist
Oct 01 13:51:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2e934731-a173-4b6f-be5b-e08f3cd48b72 does not exist
Oct 01 13:51:33 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1682542a-6cd6-4a8e-becf-afe2f83c2db9 does not exist
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:51:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:51:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:51:33 compute-0 sudo[284147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:33 compute-0 sudo[284147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:33 compute-0 sudo[284147]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:33 compute-0 sudo[284172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:51:33 compute-0 sudo[284172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:33 compute-0 sudo[284172]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:33 compute-0 sudo[284197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:33 compute-0 sudo[284197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:33 compute-0 sudo[284197]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:33 compute-0 sudo[284222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:51:33 compute-0 sudo[284222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:34 compute-0 ceph-mon[74802]: pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:51:34 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.179171805 +0000 UTC m=+0.047965355 container create 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:51:34 compute-0 systemd[1]: Started libpod-conmon-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope.
Oct 01 13:51:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.157263369 +0000 UTC m=+0.026056949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.274497044 +0000 UTC m=+0.143290674 container init 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.286555787 +0000 UTC m=+0.155349347 container start 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.290866694 +0000 UTC m=+0.159660274 container attach 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:51:34 compute-0 gracious_dirac[284304]: 167 167
Oct 01 13:51:34 compute-0 systemd[1]: libpod-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope: Deactivated successfully.
Oct 01 13:51:34 compute-0 conmon[284304]: conmon 40c70cd9599215b085d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope/container/memory.events
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.296898946 +0000 UTC m=+0.165692536 container died 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 13:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-47e7934eb05872890f02f5aa9663130eadfff80c5388599286459a81bc6df5ab-merged.mount: Deactivated successfully.
Oct 01 13:51:34 compute-0 podman[284287]: 2025-10-01 13:51:34.359513815 +0000 UTC m=+0.228307405 container remove 40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_dirac, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct 01 13:51:34 compute-0 systemd[1]: libpod-conmon-40c70cd9599215b085d5eac57ddd5132a9b61c235262bf6fa2949a9d53c40f28.scope: Deactivated successfully.
Oct 01 13:51:34 compute-0 podman[284328]: 2025-10-01 13:51:34.57241003 +0000 UTC m=+0.072472244 container create c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:51:34 compute-0 systemd[1]: Started libpod-conmon-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope.
Oct 01 13:51:34 compute-0 podman[284328]: 2025-10-01 13:51:34.545682651 +0000 UTC m=+0.045744965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:34 compute-0 podman[284328]: 2025-10-01 13:51:34.675819825 +0000 UTC m=+0.175882099 container init c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:51:34 compute-0 podman[284328]: 2025-10-01 13:51:34.686451174 +0000 UTC m=+0.186513418 container start c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:51:34 compute-0 podman[284328]: 2025-10-01 13:51:34.690774101 +0000 UTC m=+0.190836355 container attach c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:51:34 compute-0 podman[284345]: 2025-10-01 13:51:34.734776229 +0000 UTC m=+0.104713628 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:51:34 compute-0 podman[284346]: 2025-10-01 13:51:34.734625214 +0000 UTC m=+0.099786422 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 13:51:34 compute-0 podman[284348]: 2025-10-01 13:51:34.753716611 +0000 UTC m=+0.117736602 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 01 13:51:34 compute-0 podman[284342]: 2025-10-01 13:51:34.76186191 +0000 UTC m=+0.131695146 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller)
Oct 01 13:51:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:35 compute-0 sad_bell[284347]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:51:35 compute-0 sad_bell[284347]: --> relative data size: 1.0
Oct 01 13:51:35 compute-0 sad_bell[284347]: --> All data devices are unavailable
Oct 01 13:51:35 compute-0 systemd[1]: libpod-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Deactivated successfully.
Oct 01 13:51:35 compute-0 systemd[1]: libpod-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Consumed 1.238s CPU time.
Oct 01 13:51:35 compute-0 podman[284328]: 2025-10-01 13:51:35.966103765 +0000 UTC m=+1.466166019 container died c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3782d9e02054e42a3449bbb528115879b833b37340220cf4f54bc8c95f68f89-merged.mount: Deactivated successfully.
Oct 01 13:51:36 compute-0 podman[284328]: 2025-10-01 13:51:36.034744845 +0000 UTC m=+1.534807059 container remove c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:51:36 compute-0 systemd[1]: libpod-conmon-c6cc770a4c0c6374b9b7c1c1486200d2597c05d04b04c5156ff8413760b5ead2.scope: Deactivated successfully.
Oct 01 13:51:36 compute-0 sudo[284222]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:36 compute-0 ceph-mon[74802]: pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:36 compute-0 sudo[284467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:36 compute-0 sudo[284467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:36 compute-0 sudo[284467]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:36 compute-0 sudo[284492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:51:36 compute-0 sudo[284492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:36 compute-0 sudo[284492]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:36 compute-0 sudo[284517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:36 compute-0 sudo[284517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:36 compute-0 sudo[284517]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:36 compute-0 sudo[284542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:51:36 compute-0 sudo[284542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:36 compute-0 podman[284607]: 2025-10-01 13:51:36.93591177 +0000 UTC m=+0.068472876 container create c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:51:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:36 compute-0 systemd[1]: Started libpod-conmon-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope.
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:36.907572199 +0000 UTC m=+0.040133365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:37.039221382 +0000 UTC m=+0.171782548 container init c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:37.050450799 +0000 UTC m=+0.183011915 container start c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:37.054660373 +0000 UTC m=+0.187221489 container attach c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:51:37 compute-0 peaceful_antonelli[284623]: 167 167
Oct 01 13:51:37 compute-0 systemd[1]: libpod-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope: Deactivated successfully.
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:37.060039994 +0000 UTC m=+0.192601140 container died c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1165e90d372ad06ca503a2765e2ce4c1b41f7b1d3a0c0b6189ada5b1cfcb089d-merged.mount: Deactivated successfully.
Oct 01 13:51:37 compute-0 podman[284607]: 2025-10-01 13:51:37.152136911 +0000 UTC m=+0.284697987 container remove c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:51:37 compute-0 systemd[1]: libpod-conmon-c049a7e73b83a4774257a8e7634dc11cd49b1c41b4128433ecefbe580ad57cff.scope: Deactivated successfully.
Oct 01 13:51:37 compute-0 podman[284645]: 2025-10-01 13:51:37.365876293 +0000 UTC m=+0.068262241 container create e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:51:37 compute-0 systemd[1]: Started libpod-conmon-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope.
Oct 01 13:51:37 compute-0 podman[284645]: 2025-10-01 13:51:37.337132669 +0000 UTC m=+0.039518677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:37 compute-0 podman[284645]: 2025-10-01 13:51:37.478335615 +0000 UTC m=+0.180721593 container init e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:51:37 compute-0 podman[284645]: 2025-10-01 13:51:37.491707201 +0000 UTC m=+0.194093149 container start e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:51:37 compute-0 podman[284645]: 2025-10-01 13:51:37.495517321 +0000 UTC m=+0.197903289 container attach e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:51:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:38 compute-0 ceph-mon[74802]: pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]: {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     "0": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "devices": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "/dev/loop3"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             ],
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_name": "ceph_lv0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_size": "21470642176",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "name": "ceph_lv0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "tags": {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_name": "ceph",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.crush_device_class": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.encrypted": "0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_id": "0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.vdo": "0"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             },
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "vg_name": "ceph_vg0"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         }
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     ],
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     "1": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "devices": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "/dev/loop4"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             ],
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_name": "ceph_lv1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_size": "21470642176",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "name": "ceph_lv1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "tags": {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_name": "ceph",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.crush_device_class": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.encrypted": "0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_id": "1",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.vdo": "0"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             },
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "vg_name": "ceph_vg1"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         }
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     ],
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     "2": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "devices": [
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "/dev/loop5"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             ],
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_name": "ceph_lv2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_size": "21470642176",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "name": "ceph_lv2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "tags": {
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.cluster_name": "ceph",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.crush_device_class": "",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.encrypted": "0",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osd_id": "2",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:                 "ceph.vdo": "0"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             },
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "type": "block",
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:             "vg_name": "ceph_vg2"
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:         }
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]:     ]
Oct 01 13:51:38 compute-0 eloquent_elgamal[284661]: }
Oct 01 13:51:38 compute-0 systemd[1]: libpod-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope: Deactivated successfully.
Oct 01 13:51:38 compute-0 podman[284645]: 2025-10-01 13:51:38.273542713 +0000 UTC m=+0.975928671 container died e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-53f83d7d81a7fe849939df0a446f0efc7cf10863a39f6469333c12ba9e0cbd07-merged.mount: Deactivated successfully.
Oct 01 13:51:38 compute-0 podman[284645]: 2025-10-01 13:51:38.362071576 +0000 UTC m=+1.064457534 container remove e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 13:51:38 compute-0 systemd[1]: libpod-conmon-e5e13e14bd462d5ad36d0c88730eb59e4db42cbc1891a9188574f7fe9f4d99ed.scope: Deactivated successfully.
Oct 01 13:51:38 compute-0 sudo[284542]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:38 compute-0 sudo[284684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:38 compute-0 sudo[284684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:38 compute-0 sudo[284684]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:38 compute-0 sudo[284709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:51:38 compute-0 sudo[284709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:38 compute-0 sudo[284709]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:38 compute-0 sudo[284734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:38 compute-0 sudo[284734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:38 compute-0 sudo[284734]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:38 compute-0 sudo[284759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:51:38 compute-0 sudo[284759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.268426466 +0000 UTC m=+0.051970862 container create 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:51:39 compute-0 systemd[1]: Starting dnf makecache...
Oct 01 13:51:39 compute-0 systemd[1]: Started libpod-conmon-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope.
Oct 01 13:51:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.24432943 +0000 UTC m=+0.027873866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.362619788 +0000 UTC m=+0.146164244 container init 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.372443761 +0000 UTC m=+0.155988157 container start 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.377047787 +0000 UTC m=+0.160592253 container attach 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:51:39 compute-0 crazy_mclean[284841]: 167 167
Oct 01 13:51:39 compute-0 systemd[1]: libpod-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope: Deactivated successfully.
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.38312125 +0000 UTC m=+0.166665666 container died 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 13:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1db7145109f903f47cff74ce17fce6062b69cb1f19d969b3b846a08daa8e3d2-merged.mount: Deactivated successfully.
Oct 01 13:51:39 compute-0 podman[284824]: 2025-10-01 13:51:39.429081891 +0000 UTC m=+0.212626247 container remove 2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:51:39 compute-0 systemd[1]: libpod-conmon-2d1c5ec8c4f656799f2e355ac97321025295de41412ad15bb6c24e05a19f5768.scope: Deactivated successfully.
Oct 01 13:51:39 compute-0 dnf[284838]: Metadata cache refreshed recently.
Oct 01 13:51:39 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 01 13:51:39 compute-0 systemd[1]: Finished dnf makecache.
Oct 01 13:51:39 compute-0 podman[284864]: 2025-10-01 13:51:39.607578572 +0000 UTC m=+0.060118851 container create 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 13:51:39 compute-0 systemd[1]: Started libpod-conmon-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope.
Oct 01 13:51:39 compute-0 podman[284864]: 2025-10-01 13:51:39.576572667 +0000 UTC m=+0.029112956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:51:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:51:39 compute-0 podman[284864]: 2025-10-01 13:51:39.692103858 +0000 UTC m=+0.144644147 container init 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:51:39 compute-0 podman[284864]: 2025-10-01 13:51:39.704061297 +0000 UTC m=+0.156601566 container start 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:51:39 compute-0 podman[284864]: 2025-10-01 13:51:39.707646162 +0000 UTC m=+0.160186431 container attach 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:51:40 compute-0 ceph-mon[74802]: pgmap v1436: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:40 compute-0 priceless_lewin[284880]: {
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_id": 0,
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "type": "bluestore"
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     },
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_id": 2,
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "type": "bluestore"
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     },
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_id": 1,
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:         "type": "bluestore"
Oct 01 13:51:40 compute-0 priceless_lewin[284880]:     }
Oct 01 13:51:40 compute-0 priceless_lewin[284880]: }
Oct 01 13:51:40 compute-0 systemd[1]: libpod-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Deactivated successfully.
Oct 01 13:51:40 compute-0 podman[284864]: 2025-10-01 13:51:40.809943817 +0000 UTC m=+1.262484096 container died 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:51:40 compute-0 systemd[1]: libpod-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Consumed 1.116s CPU time.
Oct 01 13:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e4523d3bf8b4977ead63c01aa55276eedb3906c4b874799918239e30ffdbefb-merged.mount: Deactivated successfully.
Oct 01 13:51:40 compute-0 podman[284864]: 2025-10-01 13:51:40.873152216 +0000 UTC m=+1.325692455 container remove 0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:51:40 compute-0 systemd[1]: libpod-conmon-0bb749ab0acb4ef14716cc23b5342a89d829803f160d50973f06f5d09e91458c.scope: Deactivated successfully.
Oct 01 13:51:40 compute-0 sudo[284759]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:51:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:51:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 41556639-a1de-4205-9fdb-4dd58e21ed4e does not exist
Oct 01 13:51:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2bf69889-189f-4359-a1d2-40b0fcf422ca does not exist
Oct 01 13:51:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:41 compute-0 sudo[284927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:51:41 compute-0 sudo[284927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:41 compute-0 sudo[284927]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:41 compute-0 sudo[284952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:51:41 compute-0 sudo[284952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:51:41 compute-0 sudo[284952]: pam_unix(sudo:session): session closed for user root
Oct 01 13:51:41 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:41.525 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:51:41 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:41.529 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:51:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:51:41 compute-0 ceph-mon[74802]: pgmap v1437: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:43 compute-0 ceph-mon[74802]: pgmap v1438: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:45 compute-0 ceph-mon[74802]: pgmap v1439: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:51:47
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control']
Oct 01 13:51:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:51:47 compute-0 ceph-mon[74802]: pgmap v1440: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:51:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:51:49.531 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:51:49 compute-0 ceph-mon[74802]: pgmap v1441: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:52 compute-0 ceph-mon[74802]: pgmap v1442: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:53 compute-0 ceph-mon[74802]: pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:51:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:51:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:51:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:51:56 compute-0 ceph-mon[74802]: pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:51:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/4181186576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:51:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:51:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:51:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:51:58 compute-0 ceph-mon[74802]: pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:51:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:00 compute-0 ceph-mon[74802]: pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:02 compute-0 ceph-mon[74802]: pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:04 compute-0 ceph-mon[74802]: pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:04 compute-0 nova_compute[260022]: 2025-10-01 13:52:04.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:05 compute-0 podman[284979]: 2025-10-01 13:52:05.555644242 +0000 UTC m=+0.088663738 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 01 13:52:05 compute-0 podman[284978]: 2025-10-01 13:52:05.566877729 +0000 UTC m=+0.103978425 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:52:05 compute-0 podman[284983]: 2025-10-01 13:52:05.568105908 +0000 UTC m=+0.092222241 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 01 13:52:05 compute-0 podman[284977]: 2025-10-01 13:52:05.591438279 +0000 UTC m=+0.138210722 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 13:52:06 compute-0 ceph-mon[74802]: pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:08 compute-0 ceph-mon[74802]: pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:10 compute-0 ceph-mon[74802]: pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.393 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.395 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:52:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:52:11 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712404413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:52:11 compute-0 nova_compute[260022]: 2025-10-01 13:52:11.825 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.025 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.026 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5102MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.027 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.027 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:52:12 compute-0 ceph-mon[74802]: pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:12 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/712404413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.138 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.138 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.139 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.179 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:52:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:52:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.319 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:52:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:52:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:52:12 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413603914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.672 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.681 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.700 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.703 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:52:12 compute-0 nova_compute[260022]: 2025-10-01 13:52:12.704 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:52:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:13 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2413603914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:52:14 compute-0 ceph-mon[74802]: pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:14 compute-0 nova_compute[260022]: 2025-10-01 13:52:14.705 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:14 compute-0 nova_compute[260022]: 2025-10-01 13:52:14.705 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:14 compute-0 nova_compute[260022]: 2025-10-01 13:52:14.706 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:14 compute-0 nova_compute[260022]: 2025-10-01 13:52:14.706 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:52:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:16 compute-0 ceph-mon[74802]: pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:17 compute-0 nova_compute[260022]: 2025-10-01 13:52:17.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:18 compute-0 ceph-mon[74802]: pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:18 compute-0 nova_compute[260022]: 2025-10-01 13:52:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:18 compute-0 nova_compute[260022]: 2025-10-01 13:52:18.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:52:18 compute-0 nova_compute[260022]: 2025-10-01 13:52:18.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:52:18 compute-0 nova_compute[260022]: 2025-10-01 13:52:18.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:52:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:20 compute-0 ceph-mon[74802]: pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:21 compute-0 nova_compute[260022]: 2025-10-01 13:52:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:22 compute-0 ceph-mon[74802]: pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:22 compute-0 nova_compute[260022]: 2025-10-01 13:52:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:24 compute-0 ceph-mon[74802]: pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:24 compute-0 nova_compute[260022]: 2025-10-01 13:52:24.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:52:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:26 compute-0 ceph-mon[74802]: pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:28 compute-0 ceph-mon[74802]: pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:30 compute-0 ceph-mon[74802]: pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:32 compute-0 ceph-mon[74802]: pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:34 compute-0 ceph-mon[74802]: pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:36 compute-0 ceph-mon[74802]: pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:36 compute-0 podman[285104]: 2025-10-01 13:52:36.519168168 +0000 UTC m=+0.068164736 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:52:36 compute-0 podman[285102]: 2025-10-01 13:52:36.537127509 +0000 UTC m=+0.093312826 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:52:36 compute-0 podman[285103]: 2025-10-01 13:52:36.539635629 +0000 UTC m=+0.082525244 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd)
Oct 01 13:52:36 compute-0 podman[285105]: 2025-10-01 13:52:36.547369234 +0000 UTC m=+0.092337085 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 01 13:52:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:38 compute-0 ceph-mon[74802]: pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:40 compute-0 ceph-mon[74802]: pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:41 compute-0 sudo[285186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:41 compute-0 sudo[285186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:41 compute-0 sudo[285186]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:41 compute-0 sudo[285211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:52:41 compute-0 sudo[285211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:41 compute-0 sudo[285211]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:41 compute-0 sudo[285236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:41 compute-0 sudo[285236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:41 compute-0 sudo[285236]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:41 compute-0 sudo[285261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 13:52:41 compute-0 sudo[285261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:41 compute-0 sudo[285261]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:52:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:52:41 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:41 compute-0 sudo[285307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:41 compute-0 sudo[285307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:41 compute-0 sudo[285307]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 sudo[285332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:52:42 compute-0 sudo[285332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:42 compute-0 sudo[285332]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 sudo[285357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:42 compute-0 sudo[285357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:42 compute-0 sudo[285357]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 sudo[285382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:52:42 compute-0 sudo[285382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:42 compute-0 ceph-mon[74802]: pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:42 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:42.667 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:52:42 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:42.669 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:52:42 compute-0 sudo[285382]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 85bfe63c-5304-48f8-af21-367e2f011047 does not exist
Oct 01 13:52:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1a40df61-6d0d-4fc2-baa1-12cab0a983b0 does not exist
Oct 01 13:52:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6484a043-454b-431f-a51d-6b408b647fdd does not exist
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:52:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:52:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:52:42 compute-0 sudo[285438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:42 compute-0 sudo[285438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:42 compute-0 sudo[285438]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 sudo[285463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:52:42 compute-0 sudo[285463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:42 compute-0 sudo[285463]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:43 compute-0 sudo[285488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:43 compute-0 sudo[285488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:43 compute-0 sudo[285488]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:43 compute-0 sudo[285513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:52:43 compute-0 sudo[285513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:52:43 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.509199376 +0000 UTC m=+0.062934581 container create 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.4743727 +0000 UTC m=+0.028107945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:43 compute-0 systemd[1]: Started libpod-conmon-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope.
Oct 01 13:52:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.65912169 +0000 UTC m=+0.212856945 container init 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.672674791 +0000 UTC m=+0.226409996 container start 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:52:43 compute-0 interesting_ganguly[285595]: 167 167
Oct 01 13:52:43 compute-0 systemd[1]: libpod-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope: Deactivated successfully.
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.690997462 +0000 UTC m=+0.244732717 container attach 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.692858862 +0000 UTC m=+0.246594067 container died 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f8998d877eef755af9c10102488ea4e60be908b9c9b07119c8da46cd2195707-merged.mount: Deactivated successfully.
Oct 01 13:52:43 compute-0 podman[285578]: 2025-10-01 13:52:43.870388542 +0000 UTC m=+0.424123747 container remove 273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:52:43 compute-0 systemd[1]: libpod-conmon-273336633c83c175edd2d127566b408e88f21ff25c6ff812705a80614088543e.scope: Deactivated successfully.
Oct 01 13:52:44 compute-0 podman[285621]: 2025-10-01 13:52:44.133936247 +0000 UTC m=+0.068158206 container create 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:52:44 compute-0 podman[285621]: 2025-10-01 13:52:44.104179212 +0000 UTC m=+0.038401221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:44 compute-0 systemd[1]: Started libpod-conmon-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope.
Oct 01 13:52:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:44 compute-0 ceph-mon[74802]: pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:44 compute-0 podman[285621]: 2025-10-01 13:52:44.301678777 +0000 UTC m=+0.235900716 container init 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:52:44 compute-0 podman[285621]: 2025-10-01 13:52:44.31343068 +0000 UTC m=+0.247652629 container start 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:52:44 compute-0 podman[285621]: 2025-10-01 13:52:44.328577502 +0000 UTC m=+0.262799461 container attach 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:52:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:45 compute-0 wonderful_stonebraker[285638]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:52:45 compute-0 wonderful_stonebraker[285638]: --> relative data size: 1.0
Oct 01 13:52:45 compute-0 wonderful_stonebraker[285638]: --> All data devices are unavailable
Oct 01 13:52:45 compute-0 systemd[1]: libpod-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Deactivated successfully.
Oct 01 13:52:45 compute-0 podman[285621]: 2025-10-01 13:52:45.51527472 +0000 UTC m=+1.449496729 container died 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:52:45 compute-0 systemd[1]: libpod-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Consumed 1.142s CPU time.
Oct 01 13:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf3a9846de6bde719043a184ff9f03d0628b0f3876e560bcd7cb7b28405b7bf4-merged.mount: Deactivated successfully.
Oct 01 13:52:45 compute-0 podman[285621]: 2025-10-01 13:52:45.608303806 +0000 UTC m=+1.542525725 container remove 27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_stonebraker, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:52:45 compute-0 systemd[1]: libpod-conmon-27e38a2f448f39989c13217082bd8e9f61e66b62a5dff58262cf539848b69b25.scope: Deactivated successfully.
Oct 01 13:52:45 compute-0 sudo[285513]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:45 compute-0 sudo[285681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:45 compute-0 sudo[285681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:45 compute-0 sudo[285681]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:45 compute-0 sudo[285706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:52:45 compute-0 sudo[285706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:45 compute-0 sudo[285706]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:45 compute-0 sudo[285731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:45 compute-0 sudo[285731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:45 compute-0 sudo[285731]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:45 compute-0 sudo[285756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:52:45 compute-0 sudo[285756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:46 compute-0 podman[285821]: 2025-10-01 13:52:46.258597108 +0000 UTC m=+0.063566780 container create d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:52:46 compute-0 systemd[1]: Started libpod-conmon-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope.
Oct 01 13:52:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct 01 13:52:46 compute-0 ceph-mon[74802]: pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct 01 13:52:46 compute-0 podman[285821]: 2025-10-01 13:52:46.230892258 +0000 UTC m=+0.035861980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:46 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct 01 13:52:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:46 compute-0 podman[285821]: 2025-10-01 13:52:46.359480244 +0000 UTC m=+0.164449896 container init d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct 01 13:52:46 compute-0 podman[285821]: 2025-10-01 13:52:46.368967086 +0000 UTC m=+0.173936738 container start d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:46 compute-0 podman[285821]: 2025-10-01 13:52:46.372702255 +0000 UTC m=+0.177671927 container attach d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:46 compute-0 mystifying_bassi[285837]: 167 167
Oct 01 13:52:46 compute-0 systemd[1]: libpod-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope: Deactivated successfully.
Oct 01 13:52:46 compute-0 podman[285842]: 2025-10-01 13:52:46.451006313 +0000 UTC m=+0.045317971 container died d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cfadb01dd6bf224aecec6ccd1f3d807073f9a518ac57795aabad7cd5830f937-merged.mount: Deactivated successfully.
Oct 01 13:52:46 compute-0 podman[285842]: 2025-10-01 13:52:46.494250947 +0000 UTC m=+0.088562635 container remove d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:46 compute-0 systemd[1]: libpod-conmon-d59fd20ffd03f9bc8a76fe5db4be59685e338c1de0d541b791493cea9cc78e41.scope: Deactivated successfully.
Oct 01 13:52:46 compute-0 podman[285864]: 2025-10-01 13:52:46.763780871 +0000 UTC m=+0.073580159 container create b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:52:46 compute-0 systemd[1]: Started libpod-conmon-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope.
Oct 01 13:52:46 compute-0 podman[285864]: 2025-10-01 13:52:46.735571474 +0000 UTC m=+0.045370802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:46 compute-0 podman[285864]: 2025-10-01 13:52:46.880723567 +0000 UTC m=+0.190522895 container init b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:52:46 compute-0 podman[285864]: 2025-10-01 13:52:46.894313208 +0000 UTC m=+0.204112456 container start b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 13:52:46 compute-0 podman[285864]: 2025-10-01 13:52:46.898684618 +0000 UTC m=+0.208483966 container attach b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:52:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct 01 13:52:47 compute-0 ceph-mon[74802]: osdmap e161: 3 total, 3 up, 3 in
Oct 01 13:52:47 compute-0 ceph-mon[74802]: pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:52:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct 01 13:52:47 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct 01 13:52:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:47 compute-0 fervent_kirch[285880]: {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     "0": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "devices": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "/dev/loop3"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             ],
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_name": "ceph_lv0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_size": "21470642176",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "name": "ceph_lv0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "tags": {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_name": "ceph",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.crush_device_class": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.encrypted": "0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_id": "0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.vdo": "0"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             },
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "vg_name": "ceph_vg0"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         }
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     ],
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     "1": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "devices": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "/dev/loop4"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             ],
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_name": "ceph_lv1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_size": "21470642176",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "name": "ceph_lv1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "tags": {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_name": "ceph",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.crush_device_class": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.encrypted": "0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_id": "1",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.vdo": "0"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             },
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "vg_name": "ceph_vg1"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         }
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     ],
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     "2": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "devices": [
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "/dev/loop5"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             ],
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_name": "ceph_lv2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_size": "21470642176",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "name": "ceph_lv2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "tags": {
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.cluster_name": "ceph",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.crush_device_class": "",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.encrypted": "0",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osd_id": "2",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:                 "ceph.vdo": "0"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             },
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "type": "block",
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:             "vg_name": "ceph_vg2"
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:         }
Oct 01 13:52:47 compute-0 fervent_kirch[285880]:     ]
Oct 01 13:52:47 compute-0 fervent_kirch[285880]: }
Oct 01 13:52:47 compute-0 systemd[1]: libpod-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope: Deactivated successfully.
Oct 01 13:52:47 compute-0 podman[285889]: 2025-10-01 13:52:47.821724197 +0000 UTC m=+0.045024732 container died b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3adcaf2344eb95154cf718824db69afb0fcbc99d8e4f1be50a618ccb844da2a-merged.mount: Deactivated successfully.
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:52:47
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', '.mgr']
Oct 01 13:52:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:52:47 compute-0 podman[285889]: 2025-10-01 13:52:47.892057262 +0000 UTC m=+0.115357747 container remove b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:52:47 compute-0 systemd[1]: libpod-conmon-b20393b5b5f1606521ea2d1a5523f90a2a06c219e8c1967a228158b64da4fb53.scope: Deactivated successfully.
Oct 01 13:52:47 compute-0 sudo[285756]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:48 compute-0 sudo[285904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:48 compute-0 sudo[285904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:48 compute-0 sudo[285904]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:52:48 compute-0 sudo[285929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:52:48 compute-0 sudo[285929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:52:48 compute-0 sudo[285929]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:48 compute-0 sudo[285954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:48 compute-0 sudo[285954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:48 compute-0 sudo[285954]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:48 compute-0 sudo[285979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:52:48 compute-0 sudo[285979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct 01 13:52:48 compute-0 ceph-mon[74802]: osdmap e162: 3 total, 3 up, 3 in
Oct 01 13:52:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct 01 13:52:48 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct 01 13:52:48 compute-0 podman[286044]: 2025-10-01 13:52:48.863114377 +0000 UTC m=+0.070012396 container create f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 13:52:48 compute-0 systemd[1]: Started libpod-conmon-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope.
Oct 01 13:52:48 compute-0 podman[286044]: 2025-10-01 13:52:48.832146363 +0000 UTC m=+0.039044432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.5 KiB/s wr, 49 op/s
Oct 01 13:52:48 compute-0 podman[286044]: 2025-10-01 13:52:48.981214319 +0000 UTC m=+0.188112378 container init f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:52:48 compute-0 podman[286044]: 2025-10-01 13:52:48.995625047 +0000 UTC m=+0.202523066 container start f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:48 compute-0 podman[286044]: 2025-10-01 13:52:48.999604693 +0000 UTC m=+0.206502712 container attach f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:49 compute-0 awesome_dewdney[286060]: 167 167
Oct 01 13:52:49 compute-0 systemd[1]: libpod-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope: Deactivated successfully.
Oct 01 13:52:49 compute-0 podman[286065]: 2025-10-01 13:52:49.079076899 +0000 UTC m=+0.043067140 container died f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab645f5e88504c26179caff72aa031298b1e10989adf8d7b08d724df7ea35d5-merged.mount: Deactivated successfully.
Oct 01 13:52:49 compute-0 podman[286065]: 2025-10-01 13:52:49.120109842 +0000 UTC m=+0.084100073 container remove f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 13:52:49 compute-0 systemd[1]: libpod-conmon-f8ebcca27ecaadb2783b8930b03be660990ee78ac3f61e7144757d558f2b6195.scope: Deactivated successfully.
Oct 01 13:52:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct 01 13:52:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct 01 13:52:49 compute-0 ceph-mon[74802]: osdmap e163: 3 total, 3 up, 3 in
Oct 01 13:52:49 compute-0 ceph-mon[74802]: pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.5 KiB/s wr, 49 op/s
Oct 01 13:52:49 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct 01 13:52:49 compute-0 podman[286088]: 2025-10-01 13:52:49.376936223 +0000 UTC m=+0.058009784 container create e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 13:52:49 compute-0 systemd[1]: Started libpod-conmon-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope.
Oct 01 13:52:49 compute-0 podman[286088]: 2025-10-01 13:52:49.352847748 +0000 UTC m=+0.033921409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:52:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:52:49 compute-0 podman[286088]: 2025-10-01 13:52:49.487808566 +0000 UTC m=+0.168882207 container init e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:52:49 compute-0 podman[286088]: 2025-10-01 13:52:49.501906844 +0000 UTC m=+0.182980435 container start e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:52:49 compute-0 podman[286088]: 2025-10-01 13:52:49.506524351 +0000 UTC m=+0.187597952 container attach e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:52:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct 01 13:52:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct 01 13:52:50 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct 01 13:52:50 compute-0 ceph-mon[74802]: osdmap e164: 3 total, 3 up, 3 in
Oct 01 13:52:50 compute-0 amazing_leakey[286104]: {
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_id": 0,
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "type": "bluestore"
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     },
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_id": 2,
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "type": "bluestore"
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     },
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_id": 1,
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:         "type": "bluestore"
Oct 01 13:52:50 compute-0 amazing_leakey[286104]:     }
Oct 01 13:52:50 compute-0 amazing_leakey[286104]: }
Oct 01 13:52:50 compute-0 systemd[1]: libpod-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Deactivated successfully.
Oct 01 13:52:50 compute-0 systemd[1]: libpod-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Consumed 1.018s CPU time.
Oct 01 13:52:50 compute-0 podman[286088]: 2025-10-01 13:52:50.518445265 +0000 UTC m=+1.199518826 container died e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:52:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-51935fa5ce8f8f30560e29619622a47cedd11e3cbaa53ed66ffa5cab233948e8-merged.mount: Deactivated successfully.
Oct 01 13:52:50 compute-0 podman[286088]: 2025-10-01 13:52:50.581659013 +0000 UTC m=+1.262732594 container remove e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:52:50 compute-0 systemd[1]: libpod-conmon-e3c3f4e3368e4b817dc9c6bad912d3c33aec4ba100b258c79635ca46dfd84620.scope: Deactivated successfully.
Oct 01 13:52:50 compute-0 sudo[285979]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:52:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:52:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 11964950-c27b-4ac7-bd1c-a70c7b0928af does not exist
Oct 01 13:52:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 40ea15a9-6bc6-44b4-8bd4-2984cce723c8 does not exist
Oct 01 13:52:50 compute-0 sudo[286149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:52:50 compute-0 sudo[286149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:50 compute-0 sudo[286149]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:50 compute-0 sudo[286174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:52:50 compute-0 sudo[286174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:52:50 compute-0 sudo[286174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:52:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 8.2 KiB/s wr, 74 op/s
Oct 01 13:52:51 compute-0 ceph-mon[74802]: osdmap e165: 3 total, 3 up, 3 in
Oct 01 13:52:51 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:51 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:52:51 compute-0 ceph-mon[74802]: pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 8.2 KiB/s wr, 74 op/s
Oct 01 13:52:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct 01 13:52:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct 01 13:52:51 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct 01 13:52:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:52 compute-0 ceph-mon[74802]: osdmap e166: 3 total, 3 up, 3 in
Oct 01 13:52:52 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:52:52.671 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:52:52 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 16 KiB/s wr, 130 op/s
Oct 01 13:52:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct 01 13:52:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct 01 13:52:53 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct 01 13:52:53 compute-0 ceph-mon[74802]: pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 16 KiB/s wr, 130 op/s
Oct 01 13:52:54 compute-0 ceph-mon[74802]: osdmap e167: 3 total, 3 up, 3 in
Oct 01 13:52:54 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 107 op/s
Oct 01 13:52:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:52:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:52:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:52:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:52:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct 01 13:52:55 compute-0 ceph-mon[74802]: pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 107 op/s
Oct 01 13:52:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:52:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1981758943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:52:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct 01 13:52:55 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct 01 13:52:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 13:52:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct 01 13:52:56 compute-0 ceph-mon[74802]: osdmap e168: 3 total, 3 up, 3 in
Oct 01 13:52:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct 01 13:52:56 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct 01 13:52:56 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 14 KiB/s wr, 113 op/s
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667638841407827 of space, bias 1.0, pg target 0.2002916524223481 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:52:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:52:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:52:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct 01 13:52:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct 01 13:52:57 compute-0 ceph-mon[74802]: osdmap e169: 3 total, 3 up, 3 in
Oct 01 13:52:57 compute-0 ceph-mon[74802]: pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 14 KiB/s wr, 113 op/s
Oct 01 13:52:57 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct 01 13:52:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct 01 13:52:58 compute-0 ceph-mon[74802]: osdmap e170: 3 total, 3 up, 3 in
Oct 01 13:52:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct 01 13:52:58 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct 01 13:52:58 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct 01 13:52:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct 01 13:52:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct 01 13:52:59 compute-0 ceph-mon[74802]: osdmap e171: 3 total, 3 up, 3 in
Oct 01 13:52:59 compute-0 ceph-mon[74802]: pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct 01 13:52:59 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct 01 13:53:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct 01 13:53:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct 01 13:53:00 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct 01 13:53:00 compute-0 ceph-mon[74802]: osdmap e172: 3 total, 3 up, 3 in
Oct 01 13:53:00 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct 01 13:53:01 compute-0 ceph-mon[74802]: osdmap e173: 3 total, 3 up, 3 in
Oct 01 13:53:01 compute-0 ceph-mon[74802]: pgmap v1490: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 28 MiB/s wr, 268 op/s
Oct 01 13:53:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct 01 13:53:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct 01 13:53:02 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 13:53:02 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct 01 13:53:02 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 19 KiB/s wr, 335 op/s
Oct 01 13:53:03 compute-0 ceph-mon[74802]: osdmap e174: 3 total, 3 up, 3 in
Oct 01 13:53:03 compute-0 ceph-mon[74802]: pgmap v1492: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 19 KiB/s wr, 335 op/s
Oct 01 13:53:04 compute-0 nova_compute[260022]: 2025-10-01 13:53:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:04 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 14 KiB/s wr, 237 op/s
Oct 01 13:53:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct 01 13:53:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct 01 13:53:06 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct 01 13:53:06 compute-0 ceph-mon[74802]: pgmap v1493: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 14 KiB/s wr, 237 op/s
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.054457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786054505, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1607, "num_deletes": 512, "total_data_size": 2010339, "memory_usage": 2048352, "flush_reason": "Manual Compaction"}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786074669, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1966365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28851, "largest_seqno": 30457, "table_properties": {"data_size": 1959159, "index_size": 3768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17955, "raw_average_key_size": 19, "raw_value_size": 1942695, "raw_average_value_size": 2077, "num_data_blocks": 168, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326673, "oldest_key_time": 1759326673, "file_creation_time": 1759326786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 20246 microseconds, and 9181 cpu microseconds.
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.074711) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1966365 bytes OK
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.074756) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076102) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076114) EVENT_LOG_v1 {"time_micros": 1759326786076110, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.076133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2002166, prev total WAL file size 2002166, number of live WAL files 2.
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.077244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1920KB)], [65(7352KB)]
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786077272, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9495493, "oldest_snapshot_seqno": -1}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5062 keys, 7625578 bytes, temperature: kUnknown
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786114543, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7625578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7591597, "index_size": 20239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128126, "raw_average_key_size": 25, "raw_value_size": 7499818, "raw_average_value_size": 1481, "num_data_blocks": 828, "num_entries": 5062, "num_filter_entries": 5062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759326786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.114898) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7625578 bytes
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.116149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.2 rd, 204.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 6099, records dropped: 1037 output_compression: NoCompression
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.116176) EVENT_LOG_v1 {"time_micros": 1759326786116164, "job": 36, "event": "compaction_finished", "compaction_time_micros": 37357, "compaction_time_cpu_micros": 18364, "output_level": 6, "num_output_files": 1, "total_output_size": 7625578, "num_input_records": 6099, "num_output_records": 5062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786116972, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759326786119717, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.077152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:53:06.120208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:53:06 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 13 KiB/s wr, 228 op/s
Oct 01 13:53:07 compute-0 ceph-mon[74802]: osdmap e175: 3 total, 3 up, 3 in
Oct 01 13:53:07 compute-0 podman[286204]: 2025-10-01 13:53:07.556260481 +0000 UTC m=+0.091098157 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 01 13:53:07 compute-0 podman[286202]: 2025-10-01 13:53:07.56318033 +0000 UTC m=+0.109600824 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:53:07 compute-0 podman[286203]: 2025-10-01 13:53:07.567978193 +0000 UTC m=+0.105597237 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 13:53:07 compute-0 podman[286201]: 2025-10-01 13:53:07.595975193 +0000 UTC m=+0.142332265 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:53:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct 01 13:53:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct 01 13:53:07 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct 01 13:53:08 compute-0 ceph-mon[74802]: pgmap v1495: 305 pgs: 305 active+clean; 41 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 13 KiB/s wr, 228 op/s
Oct 01 13:53:08 compute-0 ceph-mon[74802]: osdmap e176: 3 total, 3 up, 3 in
Oct 01 13:53:08 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 11 KiB/s wr, 140 op/s
Oct 01 13:53:10 compute-0 ceph-mon[74802]: pgmap v1497: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 11 KiB/s wr, 140 op/s
Oct 01 13:53:10 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:53:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct 01 13:53:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct 01 13:53:12 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct 01 13:53:12 compute-0 ceph-mon[74802]: pgmap v1498: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 13:53:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:53:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.320 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:53:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:12.321 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:53:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:12 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 KiB/s wr, 57 op/s
Oct 01 13:53:13 compute-0 ceph-mon[74802]: osdmap e177: 3 total, 3 up, 3 in
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.386 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:53:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:53:13 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260530635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:53:13 compute-0 nova_compute[260022]: 2025-10-01 13:53:13.849 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:53:14 compute-0 ceph-mon[74802]: pgmap v1500: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.3 KiB/s wr, 57 op/s
Oct 01 13:53:14 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1260530635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.128 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.131 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.247 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.263 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.264 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.264 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.329 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:53:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:53:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242115474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.807 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.816 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.844 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.847 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:53:14 compute-0 nova_compute[260022]: 2025-10-01 13:53:14.847 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:53:14 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.7 KiB/s wr, 49 op/s
Oct 01 13:53:15 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4242115474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:53:16 compute-0 ceph-mon[74802]: pgmap v1501: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.7 KiB/s wr, 49 op/s
Oct 01 13:53:16 compute-0 nova_compute[260022]: 2025-10-01 13:53:16.848 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:16 compute-0 nova_compute[260022]: 2025-10-01 13:53:16.850 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:16 compute-0 nova_compute[260022]: 2025-10-01 13:53:16.850 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:16 compute-0 nova_compute[260022]: 2025-10-01 13:53:16.851 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:53:16 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 41 op/s
Oct 01 13:53:17 compute-0 nova_compute[260022]: 2025-10-01 13:53:17.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:18 compute-0 ceph-mon[74802]: pgmap v1502: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 41 op/s
Oct 01 13:53:18 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:53:20 compute-0 ceph-mon[74802]: pgmap v1503: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:53:20 compute-0 nova_compute[260022]: 2025-10-01 13:53:20.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:20 compute-0 nova_compute[260022]: 2025-10-01 13:53:20.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:53:20 compute-0 nova_compute[260022]: 2025-10-01 13:53:20.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:53:20 compute-0 nova_compute[260022]: 2025-10-01 13:53:20.514 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:53:20 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:53:22 compute-0 ceph-mon[74802]: pgmap v1504: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 13:53:22 compute-0 nova_compute[260022]: 2025-10-01 13:53:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct 01 13:53:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct 01 13:53:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct 01 13:53:22 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:23 compute-0 nova_compute[260022]: 2025-10-01 13:53:23.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:53:23 compute-0 ceph-mon[74802]: osdmap e178: 3 total, 3 up, 3 in
Oct 01 13:53:23 compute-0 ceph-mon[74802]: pgmap v1506: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:24 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:26 compute-0 ceph-mon[74802]: pgmap v1507: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:26 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:28 compute-0 ceph-mon[74802]: pgmap v1508: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:53:28 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:30 compute-0 ceph-mon[74802]: pgmap v1509: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:30 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:32 compute-0 ceph-mon[74802]: pgmap v1510: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:32 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:34 compute-0 ceph-mon[74802]: pgmap v1511: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:34 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:36 compute-0 ceph-mon[74802]: pgmap v1512: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:36 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:38 compute-0 ceph-mon[74802]: pgmap v1513: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:38 compute-0 podman[286336]: 2025-10-01 13:53:38.526759212 +0000 UTC m=+0.066346759 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:53:38 compute-0 podman[286334]: 2025-10-01 13:53:38.535573783 +0000 UTC m=+0.085711375 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 01 13:53:38 compute-0 podman[286335]: 2025-10-01 13:53:38.56947445 +0000 UTC m=+0.113170158 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:53:38 compute-0 podman[286333]: 2025-10-01 13:53:38.588701221 +0000 UTC m=+0.138511692 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:53:38 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:40 compute-0 ceph-mon[74802]: pgmap v1514: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:40 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:42 compute-0 ceph-mon[74802]: pgmap v1515: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:42 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:43.116 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:53:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:43.117 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:53:44 compute-0 ceph-mon[74802]: pgmap v1516: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:44 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:46 compute-0 ceph-mon[74802]: pgmap v1517: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:46 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:53:47
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta']
Oct 01 13:53:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:53:48 compute-0 ceph-mon[74802]: pgmap v1518: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2102413293
Oct 01 13:53:48 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:50 compute-0 ceph-mon[74802]: pgmap v1519: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:50 compute-0 sudo[286411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:50 compute-0 sudo[286411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:50 compute-0 sudo[286411]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:50 compute-0 sudo[286436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:53:50 compute-0 sudo[286436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:50 compute-0 sudo[286436]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:50 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:51 compute-0 sudo[286461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:51 compute-0 sudo[286461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:51 compute-0 sudo[286461]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:51 compute-0 sudo[286486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 13:53:51 compute-0 sudo[286486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:51 compute-0 podman[286583]: 2025-10-01 13:53:51.613618178 +0000 UTC m=+0.070730799 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:53:51 compute-0 podman[286583]: 2025-10-01 13:53:51.711330333 +0000 UTC m=+0.168442864 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 13:53:52 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:53:52.124 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:53:52 compute-0 ceph-mon[74802]: pgmap v1520: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:52 compute-0 sudo[286486]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:53:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:53:52 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:52 compute-0 sudo[286744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:52 compute-0 sudo[286744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:52 compute-0 sudo[286744]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:52 compute-0 sudo[286769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:53:52 compute-0 sudo[286769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:52 compute-0 sudo[286769]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:52 compute-0 sudo[286794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:52 compute-0 sudo[286794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:52 compute-0 sudo[286794]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:52 compute-0 sudo[286819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:53:52 compute-0 sudo[286819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:53 compute-0 sudo[286819]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:53 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:53 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:53 compute-0 ceph-mon[74802]: pgmap v1521: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:53 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4461026a-2da3-40a4-bcb3-b8048a76bbef does not exist
Oct 01 13:53:53 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2dd288d1-fa30-46f1-9d66-d548258e4640 does not exist
Oct 01 13:53:53 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 687be7e7-6f97-4dbf-a935-e64a6ea7305f does not exist
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:53:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:53:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:53:53 compute-0 sudo[286875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:53 compute-0 sudo[286875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:53 compute-0 sudo[286875]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:53 compute-0 sudo[286900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:53:53 compute-0 sudo[286900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:53 compute-0 sudo[286900]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:53 compute-0 sudo[286925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:53 compute-0 sudo[286925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:53 compute-0 sudo[286925]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:53 compute-0 sudo[286950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:53:53 compute-0 sudo[286950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.381220718 +0000 UTC m=+0.063690655 container create 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:53:54 compute-0 systemd[1]: Started libpod-conmon-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope.
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.354283822 +0000 UTC m=+0.036753799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.481563796 +0000 UTC m=+0.164033783 container init 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.492410621 +0000 UTC m=+0.174880548 container start 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.496539042 +0000 UTC m=+0.179008969 container attach 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:53:54 compute-0 pensive_swartz[287030]: 167 167
Oct 01 13:53:54 compute-0 systemd[1]: libpod-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope: Deactivated successfully.
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.5015122 +0000 UTC m=+0.183982137 container died 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:53:54 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee0d854e017ce7f661e23df3f8dc27a4b53ef942e4bf404e5a1f05fff56711dd-merged.mount: Deactivated successfully.
Oct 01 13:53:54 compute-0 podman[287013]: 2025-10-01 13:53:54.5603611 +0000 UTC m=+0.242831027 container remove 2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:53:54 compute-0 systemd[1]: libpod-conmon-2a4ca8786112616b94b68142c7385150b42b06e0d139558b2bba4204a76f05c7.scope: Deactivated successfully.
Oct 01 13:53:54 compute-0 podman[287052]: 2025-10-01 13:53:54.816597173 +0000 UTC m=+0.064382087 container create 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:53:54 compute-0 systemd[1]: Started libpod-conmon-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope.
Oct 01 13:53:54 compute-0 podman[287052]: 2025-10-01 13:53:54.791506125 +0000 UTC m=+0.039291029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:54 compute-0 podman[287052]: 2025-10-01 13:53:54.934038054 +0000 UTC m=+0.181823028 container init 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:53:54 compute-0 podman[287052]: 2025-10-01 13:53:54.955553388 +0000 UTC m=+0.203338302 container start 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 13:53:54 compute-0 podman[287052]: 2025-10-01 13:53:54.959891685 +0000 UTC m=+0.207676599 container attach 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 13:53:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:53:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:53:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:53:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:53:55 compute-0 ceph-mon[74802]: pgmap v1522: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:53:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3108378686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:53:56 compute-0 eloquent_thompson[287069]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:53:56 compute-0 eloquent_thompson[287069]: --> relative data size: 1.0
Oct 01 13:53:56 compute-0 eloquent_thompson[287069]: --> All data devices are unavailable
Oct 01 13:53:56 compute-0 systemd[1]: libpod-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Deactivated successfully.
Oct 01 13:53:56 compute-0 systemd[1]: libpod-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Consumed 1.087s CPU time.
Oct 01 13:53:56 compute-0 podman[287052]: 2025-10-01 13:53:56.090909893 +0000 UTC m=+1.338694777 container died 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf349d81051b2294b7d7800a8da4f9c9450a339b0cc48d3a940252cf55d9bb3-merged.mount: Deactivated successfully.
Oct 01 13:53:56 compute-0 podman[287052]: 2025-10-01 13:53:56.15059731 +0000 UTC m=+1.398382244 container remove 9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:53:56 compute-0 systemd[1]: libpod-conmon-9489b3c26c5960adc47117283d677670dd00804ecf2b9bccbf121496c5c63466.scope: Deactivated successfully.
Oct 01 13:53:56 compute-0 sudo[286950]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:56 compute-0 sudo[287112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:56 compute-0 sudo[287112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:56 compute-0 sudo[287112]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:56 compute-0 sudo[287137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:53:56 compute-0 sudo[287137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:56 compute-0 sudo[287137]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:56 compute-0 sudo[287162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:56 compute-0 sudo[287162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:56 compute-0 sudo[287162]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:56 compute-0 sudo[287187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:53:56 compute-0 sudo[287187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:56 compute-0 podman[287253]: 2025-10-01 13:53:56.988127283 +0000 UTC m=+0.064801201 container create 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:57 compute-0 systemd[1]: Started libpod-conmon-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope.
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:56.965007638 +0000 UTC m=+0.041681596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:57.084363401 +0000 UTC m=+0.161037329 container init 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:57.094395629 +0000 UTC m=+0.171069507 container start 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:57.09817624 +0000 UTC m=+0.174850158 container attach 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:53:57 compute-0 stoic_varahamihira[287268]: 167 167
Oct 01 13:53:57 compute-0 systemd[1]: libpod-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope: Deactivated successfully.
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:57.105076878 +0000 UTC m=+0.181750796 container died 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96c131f4cfcc5f39644f582e1ff2f68f206289f674b40dfdac88a3d66b32698-merged.mount: Deactivated successfully.
Oct 01 13:53:57 compute-0 podman[287253]: 2025-10-01 13:53:57.153043263 +0000 UTC m=+0.229717151 container remove 268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 13:53:57 compute-0 systemd[1]: libpod-conmon-268a41d55feb6abdd2953e86e215be2a1c0cf2629bb2dc6aef49887961155bfc.scope: Deactivated successfully.
Oct 01 13:53:57 compute-0 podman[287291]: 2025-10-01 13:53:57.398722249 +0000 UTC m=+0.056780635 container create 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:53:57 compute-0 systemd[1]: Started libpod-conmon-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope.
Oct 01 13:53:57 compute-0 podman[287291]: 2025-10-01 13:53:57.371434312 +0000 UTC m=+0.029492748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:53:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:53:57 compute-0 podman[287291]: 2025-10-01 13:53:57.502744114 +0000 UTC m=+0.160802480 container init 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:53:57 compute-0 podman[287291]: 2025-10-01 13:53:57.513156755 +0000 UTC m=+0.171215111 container start 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 13:53:57 compute-0 podman[287291]: 2025-10-01 13:53:57.516390738 +0000 UTC m=+0.174449094 container attach 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:53:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:53:58 compute-0 ceph-mon[74802]: pgmap v1523: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:58 compute-0 clever_panini[287307]: {
Oct 01 13:53:58 compute-0 clever_panini[287307]:     "0": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:         {
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "devices": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "/dev/loop3"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             ],
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_name": "ceph_lv0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_size": "21470642176",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "name": "ceph_lv0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "tags": {
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_name": "ceph",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.crush_device_class": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.encrypted": "0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_id": "0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.vdo": "0"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             },
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "vg_name": "ceph_vg0"
Oct 01 13:53:58 compute-0 clever_panini[287307]:         }
Oct 01 13:53:58 compute-0 clever_panini[287307]:     ],
Oct 01 13:53:58 compute-0 clever_panini[287307]:     "1": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:         {
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "devices": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "/dev/loop4"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             ],
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_name": "ceph_lv1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_size": "21470642176",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "name": "ceph_lv1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "tags": {
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_name": "ceph",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.crush_device_class": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.encrypted": "0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_id": "1",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.vdo": "0"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             },
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "vg_name": "ceph_vg1"
Oct 01 13:53:58 compute-0 clever_panini[287307]:         }
Oct 01 13:53:58 compute-0 clever_panini[287307]:     ],
Oct 01 13:53:58 compute-0 clever_panini[287307]:     "2": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:         {
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "devices": [
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "/dev/loop5"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             ],
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_name": "ceph_lv2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_size": "21470642176",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "name": "ceph_lv2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "tags": {
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.cluster_name": "ceph",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.crush_device_class": "",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.encrypted": "0",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osd_id": "2",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:                 "ceph.vdo": "0"
Oct 01 13:53:58 compute-0 clever_panini[287307]:             },
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "type": "block",
Oct 01 13:53:58 compute-0 clever_panini[287307]:             "vg_name": "ceph_vg2"
Oct 01 13:53:58 compute-0 clever_panini[287307]:         }
Oct 01 13:53:58 compute-0 clever_panini[287307]:     ]
Oct 01 13:53:58 compute-0 clever_panini[287307]: }
Oct 01 13:53:58 compute-0 systemd[1]: libpod-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope: Deactivated successfully.
Oct 01 13:53:58 compute-0 podman[287291]: 2025-10-01 13:53:58.276938735 +0000 UTC m=+0.934997121 container died 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-98fdbff4ad5d798a0eaa070502088da028b1fe242baf268a3a460a0d0a711db1-merged.mount: Deactivated successfully.
Oct 01 13:53:58 compute-0 podman[287291]: 2025-10-01 13:53:58.338928694 +0000 UTC m=+0.996987040 container remove 627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:53:58 compute-0 systemd[1]: libpod-conmon-627ac44942b094affb8800c861fc69c883c9c7a7b8aabcfac2f6b45ad9324067.scope: Deactivated successfully.
Oct 01 13:53:58 compute-0 sudo[287187]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:58 compute-0 sudo[287330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:58 compute-0 sudo[287330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:58 compute-0 sudo[287330]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:58 compute-0 sudo[287355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:53:58 compute-0 sudo[287355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:58 compute-0 sudo[287355]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:58 compute-0 sudo[287380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:53:58 compute-0 sudo[287380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:58 compute-0 sudo[287380]: pam_unix(sudo:session): session closed for user root
Oct 01 13:53:58 compute-0 sudo[287405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:53:58 compute-0 sudo[287405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:53:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.225451963 +0000 UTC m=+0.065685928 container create 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:53:59 compute-0 systemd[1]: Started libpod-conmon-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope.
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.194924914 +0000 UTC m=+0.035158929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.33203952 +0000 UTC m=+0.172273525 container init 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.343592858 +0000 UTC m=+0.183826823 container start 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:53:59 compute-0 thirsty_meitner[287487]: 167 167
Oct 01 13:53:59 compute-0 systemd[1]: libpod-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope: Deactivated successfully.
Oct 01 13:53:59 compute-0 conmon[287487]: conmon 723443c7c776976f72f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope/container/memory.events
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.360380871 +0000 UTC m=+0.200614896 container attach 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.361026501 +0000 UTC m=+0.201260426 container died 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-15aa125f70a38bdb19d3a021fb538692b7c06eb0c6c4b038555965fe1671345e-merged.mount: Deactivated successfully.
Oct 01 13:53:59 compute-0 podman[287471]: 2025-10-01 13:53:59.404670318 +0000 UTC m=+0.244904273 container remove 723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:53:59 compute-0 systemd[1]: libpod-conmon-723443c7c776976f72f69d65e6a3d6c0b2030bd063b20564a09722fa194a1425.scope: Deactivated successfully.
Oct 01 13:53:59 compute-0 podman[287511]: 2025-10-01 13:53:59.62694022 +0000 UTC m=+0.070458640 container create 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:53:59 compute-0 systemd[1]: Started libpod-conmon-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope.
Oct 01 13:53:59 compute-0 podman[287511]: 2025-10-01 13:53:59.598559388 +0000 UTC m=+0.042077878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:53:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:53:59 compute-0 podman[287511]: 2025-10-01 13:53:59.739224148 +0000 UTC m=+0.182742628 container init 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 13:53:59 compute-0 podman[287511]: 2025-10-01 13:53:59.75345337 +0000 UTC m=+0.196971800 container start 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:53:59 compute-0 podman[287511]: 2025-10-01 13:53:59.757654854 +0000 UTC m=+0.201173284 container attach 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 13:54:00 compute-0 ceph-mon[74802]: pgmap v1524: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]: {
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_id": 0,
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "type": "bluestore"
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     },
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_id": 2,
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "type": "bluestore"
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     },
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_id": 1,
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:         "type": "bluestore"
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]:     }
Oct 01 13:54:00 compute-0 optimistic_albattani[287528]: }
Oct 01 13:54:00 compute-0 systemd[1]: libpod-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Deactivated successfully.
Oct 01 13:54:00 compute-0 podman[287511]: 2025-10-01 13:54:00.913328375 +0000 UTC m=+1.356846795 container died 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:54:00 compute-0 systemd[1]: libpod-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Consumed 1.164s CPU time.
Oct 01 13:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a7f14b519656384e4689ea53fee3da27a7d3f56c51c1944652bea41db7e806-merged.mount: Deactivated successfully.
Oct 01 13:54:00 compute-0 podman[287511]: 2025-10-01 13:54:00.997124948 +0000 UTC m=+1.440643368 container remove 8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:54:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:01 compute-0 systemd[1]: libpod-conmon-8724cff83ca69b78f3778ce41288182d9fb2f24bf791fb314483ba95224d2779.scope: Deactivated successfully.
Oct 01 13:54:01 compute-0 sudo[287405]: pam_unix(sudo:session): session closed for user root
Oct 01 13:54:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:54:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:54:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:54:01 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:54:01 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a3e09655-3758-4374-af6c-ac7500b3c1a8 does not exist
Oct 01 13:54:01 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 72b3a531-36f3-4d4b-919c-26d1c93788ea does not exist
Oct 01 13:54:01 compute-0 sudo[287572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:54:01 compute-0 sudo[287572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:54:01 compute-0 sudo[287572]: pam_unix(sudo:session): session closed for user root
Oct 01 13:54:01 compute-0 sudo[287597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:54:01 compute-0 sudo[287597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:54:01 compute-0 sudo[287597]: pam_unix(sudo:session): session closed for user root
Oct 01 13:54:02 compute-0 ceph-mon[74802]: pgmap v1525: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:54:02 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:54:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:04 compute-0 ceph-mon[74802]: pgmap v1526: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:04 compute-0 nova_compute[260022]: 2025-10-01 13:54:04.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:06 compute-0 ceph-mon[74802]: pgmap v1527: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:08 compute-0 ceph-mon[74802]: pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:09 compute-0 podman[287623]: 2025-10-01 13:54:09.538020875 +0000 UTC m=+0.078365511 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:54:09 compute-0 podman[287625]: 2025-10-01 13:54:09.560709416 +0000 UTC m=+0.081478020 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:54:09 compute-0 podman[287624]: 2025-10-01 13:54:09.561769761 +0000 UTC m=+0.096789148 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:54:09 compute-0 podman[287622]: 2025-10-01 13:54:09.584661467 +0000 UTC m=+0.121457350 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct 01 13:54:10 compute-0 ceph-mon[74802]: pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:12 compute-0 ceph-mon[74802]: pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.322 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:54:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:54:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:54:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:54:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:54:13 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946964686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.803 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.985 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5094MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:54:13 compute-0 nova_compute[260022]: 2025-10-01 13:54:13.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.062 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.085 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.086 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.086 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.152 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:54:14 compute-0 ceph-mon[74802]: pgmap v1531: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:14 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1946964686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:54:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:54:14 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486342018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.594 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.603 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.624 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.626 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:54:14 compute-0 nova_compute[260022]: 2025-10-01 13:54:14.627 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:54:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:15 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1486342018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:54:16 compute-0 ceph-mon[74802]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:17 compute-0 nova_compute[260022]: 2025-10-01 13:54:17.627 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:17 compute-0 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:17 compute-0 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:17 compute-0 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:17 compute-0 nova_compute[260022]: 2025-10-01 13:54:17.628 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:54:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:18 compute-0 ceph-mon[74802]: pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:20 compute-0 ceph-mon[74802]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:22 compute-0 ceph-mon[74802]: pgmap v1535: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:22 compute-0 nova_compute[260022]: 2025-10-01 13:54:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:22 compute-0 nova_compute[260022]: 2025-10-01 13:54:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:54:22 compute-0 nova_compute[260022]: 2025-10-01 13:54:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:54:22 compute-0 nova_compute[260022]: 2025-10-01 13:54:22.375 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:54:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:23 compute-0 nova_compute[260022]: 2025-10-01 13:54:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:24 compute-0 ceph-mon[74802]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:25 compute-0 nova_compute[260022]: 2025-10-01 13:54:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:26 compute-0 ceph-mon[74802]: pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:28 compute-0 ceph-mon[74802]: pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:29 compute-0 nova_compute[260022]: 2025-10-01 13:54:29.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:54:30 compute-0 ceph-mon[74802]: pgmap v1539: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:32 compute-0 ceph-mon[74802]: pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:34 compute-0 ceph-mon[74802]: pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:36 compute-0 ceph-mon[74802]: pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:38 compute-0 ceph-mon[74802]: pgmap v1543: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:40 compute-0 ceph-mon[74802]: pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:40 compute-0 podman[287753]: 2025-10-01 13:54:40.528805736 +0000 UTC m=+0.058921413 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:54:40 compute-0 podman[287748]: 2025-10-01 13:54:40.531059658 +0000 UTC m=+0.074561470 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:54:40 compute-0 podman[287749]: 2025-10-01 13:54:40.53268894 +0000 UTC m=+0.075727717 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS)
Oct 01 13:54:40 compute-0 podman[287747]: 2025-10-01 13:54:40.613926682 +0000 UTC m=+0.163782306 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, tcib_managed=true)
Oct 01 13:54:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:42 compute-0 ceph-mon[74802]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:43 compute-0 ceph-mon[74802]: pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:43.662 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:54:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:43.664 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:54:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:46 compute-0 ceph-mon[74802]: pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:54:47.666 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:54:47
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Oct 01 13:54:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:54:48 compute-0 ceph-mon[74802]: pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:54:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:54:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:50 compute-0 ceph-mon[74802]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:52 compute-0 ceph-mon[74802]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:54 compute-0 ceph-mon[74802]: pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:54:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:54:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:54:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:54:56 compute-0 ceph-mon[74802]: pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:54:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3134383617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:54:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:54:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:54:58 compute-0 ceph-mon[74802]: pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:54:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:00 compute-0 ceph-mon[74802]: pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:01 compute-0 sudo[287823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:01 compute-0 sudo[287823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:01 compute-0 sudo[287823]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:01 compute-0 sudo[287848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:55:01 compute-0 sudo[287848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:01 compute-0 sudo[287848]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:01 compute-0 sudo[287873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:01 compute-0 sudo[287873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:01 compute-0 sudo[287873]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:01 compute-0 sudo[287898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:55:01 compute-0 sudo[287898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:02 compute-0 ceph-mon[74802]: pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:02 compute-0 sudo[287898]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:02 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 02b0803a-067b-46b3-aa3b-553dbd10cf5d does not exist
Oct 01 13:55:02 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 42c2fae2-4485-4776-935b-04729fbf8a24 does not exist
Oct 01 13:55:02 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev cc46bf8b-7217-4c72-a0e5-8722d1f255bd does not exist
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:55:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:55:02 compute-0 sudo[287955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:02 compute-0 sudo[287955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:02 compute-0 sudo[287955]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:02 compute-0 sudo[287980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:55:02 compute-0 sudo[287980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:02 compute-0 sudo[287980]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:02 compute-0 sudo[288005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:02 compute-0 sudo[288005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:02 compute-0 sudo[288005]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:02 compute-0 sudo[288030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:55:02 compute-0 sudo[288030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.008161324 +0000 UTC m=+0.048522857 container create 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:55:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:03 compute-0 systemd[1]: Started libpod-conmon-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope.
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:02.986855175 +0000 UTC m=+0.027216668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.103052367 +0000 UTC m=+0.143413870 container init 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.111946911 +0000 UTC m=+0.152308404 container start 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.115975999 +0000 UTC m=+0.156337512 container attach 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:03 compute-0 great_grothendieck[288113]: 167 167
Oct 01 13:55:03 compute-0 systemd[1]: libpod-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope: Deactivated successfully.
Oct 01 13:55:03 compute-0 conmon[288113]: conmon 770cc08890193b194482 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope/container/memory.events
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.11976556 +0000 UTC m=+0.160127083 container died 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct 01 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e6adf3597cc3127dff65741f3c45b600a5d6169ae72a9bd4d745d44e30ab1a6-merged.mount: Deactivated successfully.
Oct 01 13:55:03 compute-0 podman[288096]: 2025-10-01 13:55:03.156435998 +0000 UTC m=+0.196797491 container remove 770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:55:03 compute-0 systemd[1]: libpod-conmon-770cc08890193b194482a453e78bc013b84f4f818f716586182b9883b4f9ca3c.scope: Deactivated successfully.
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:55:03 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:55:03 compute-0 podman[288137]: 2025-10-01 13:55:03.34513447 +0000 UTC m=+0.054041852 container create 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:55:03 compute-0 systemd[1]: Started libpod-conmon-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope.
Oct 01 13:55:03 compute-0 podman[288137]: 2025-10-01 13:55:03.32347889 +0000 UTC m=+0.032386302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:03 compute-0 podman[288137]: 2025-10-01 13:55:03.459900517 +0000 UTC m=+0.168807989 container init 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 01 13:55:03 compute-0 podman[288137]: 2025-10-01 13:55:03.468687417 +0000 UTC m=+0.177594799 container start 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:55:03 compute-0 podman[288137]: 2025-10-01 13:55:03.472468028 +0000 UTC m=+0.181375510 container attach 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:55:04 compute-0 ceph-mon[74802]: pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:04 compute-0 nova_compute[260022]: 2025-10-01 13:55:04.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:04 compute-0 silly_elgamal[288154]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:55:04 compute-0 silly_elgamal[288154]: --> relative data size: 1.0
Oct 01 13:55:04 compute-0 silly_elgamal[288154]: --> All data devices are unavailable
Oct 01 13:55:04 compute-0 systemd[1]: libpod-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Deactivated successfully.
Oct 01 13:55:04 compute-0 podman[288137]: 2025-10-01 13:55:04.619980719 +0000 UTC m=+1.328888121 container died 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:55:04 compute-0 systemd[1]: libpod-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Consumed 1.107s CPU time.
Oct 01 13:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc44f70903f087ccfc05bb69127f70fae32fdfccf774c532f73a29c2ee56bda-merged.mount: Deactivated successfully.
Oct 01 13:55:04 compute-0 podman[288137]: 2025-10-01 13:55:04.695599529 +0000 UTC m=+1.404506941 container remove 6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:04 compute-0 systemd[1]: libpod-conmon-6d158e74b80e92273d14219ffcdb5bb3fea88cca064e14fb4a191ca9e5f81e67.scope: Deactivated successfully.
Oct 01 13:55:04 compute-0 sudo[288030]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:04 compute-0 sudo[288195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:04 compute-0 sudo[288195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:04 compute-0 sudo[288195]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:04 compute-0 sudo[288220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:55:04 compute-0 sudo[288220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:04 compute-0 sudo[288220]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:05 compute-0 sudo[288245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:05 compute-0 sudo[288245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:05 compute-0 sudo[288245]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:05 compute-0 sudo[288270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:55:05 compute-0 sudo[288270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.562865301 +0000 UTC m=+0.065024433 container create 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 13:55:05 compute-0 systemd[1]: Started libpod-conmon-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope.
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.533923059 +0000 UTC m=+0.036082231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.64944758 +0000 UTC m=+0.151606742 container init 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.657875678 +0000 UTC m=+0.160034800 container start 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.661722831 +0000 UTC m=+0.163881943 container attach 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:55:05 compute-0 optimistic_lalande[288351]: 167 167
Oct 01 13:55:05 compute-0 systemd[1]: libpod-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope: Deactivated successfully.
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.664912483 +0000 UTC m=+0.167071965 container died 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e1e71e34d25b5015ac1e7f989bdd5a6d959b3fd18ea5592c706ed2bd4a8b65-merged.mount: Deactivated successfully.
Oct 01 13:55:05 compute-0 podman[288335]: 2025-10-01 13:55:05.713032636 +0000 UTC m=+0.215191748 container remove 67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:05 compute-0 systemd[1]: libpod-conmon-67ff30fba13a8ae3dd37b7ff9a0f9806cc7fcd8300675131d0a37f360c4a341b.scope: Deactivated successfully.
Oct 01 13:55:05 compute-0 podman[288374]: 2025-10-01 13:55:05.933070146 +0000 UTC m=+0.048174116 container create d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:55:05 compute-0 systemd[1]: Started libpod-conmon-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope.
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:05.907293215 +0000 UTC m=+0.022397215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:06 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:06.045161698 +0000 UTC m=+0.160265738 container init d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:06.057407498 +0000 UTC m=+0.172511428 container start d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:06.061430176 +0000 UTC m=+0.176534206 container attach d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:55:06 compute-0 ceph-mon[74802]: pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:06 compute-0 practical_gates[288390]: {
Oct 01 13:55:06 compute-0 practical_gates[288390]:     "0": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:         {
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "devices": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "/dev/loop3"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             ],
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_name": "ceph_lv0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_size": "21470642176",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "name": "ceph_lv0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "tags": {
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_name": "ceph",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.crush_device_class": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.encrypted": "0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_id": "0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.vdo": "0"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             },
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "vg_name": "ceph_vg0"
Oct 01 13:55:06 compute-0 practical_gates[288390]:         }
Oct 01 13:55:06 compute-0 practical_gates[288390]:     ],
Oct 01 13:55:06 compute-0 practical_gates[288390]:     "1": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:         {
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "devices": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "/dev/loop4"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             ],
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_name": "ceph_lv1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_size": "21470642176",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "name": "ceph_lv1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "tags": {
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_name": "ceph",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.crush_device_class": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.encrypted": "0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_id": "1",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.vdo": "0"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             },
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "vg_name": "ceph_vg1"
Oct 01 13:55:06 compute-0 practical_gates[288390]:         }
Oct 01 13:55:06 compute-0 practical_gates[288390]:     ],
Oct 01 13:55:06 compute-0 practical_gates[288390]:     "2": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:         {
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "devices": [
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "/dev/loop5"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             ],
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_name": "ceph_lv2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_size": "21470642176",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "name": "ceph_lv2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "tags": {
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.cluster_name": "ceph",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.crush_device_class": "",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.encrypted": "0",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osd_id": "2",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:                 "ceph.vdo": "0"
Oct 01 13:55:06 compute-0 practical_gates[288390]:             },
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "type": "block",
Oct 01 13:55:06 compute-0 practical_gates[288390]:             "vg_name": "ceph_vg2"
Oct 01 13:55:06 compute-0 practical_gates[288390]:         }
Oct 01 13:55:06 compute-0 practical_gates[288390]:     ]
Oct 01 13:55:06 compute-0 practical_gates[288390]: }
Oct 01 13:55:06 compute-0 systemd[1]: libpod-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope: Deactivated successfully.
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:06.879068887 +0000 UTC m=+0.994172857 container died d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3764bbd1bef6015d70174e8e0f9b39a5ab415013337b6461d8f1da26e3740874-merged.mount: Deactivated successfully.
Oct 01 13:55:06 compute-0 podman[288374]: 2025-10-01 13:55:06.972109152 +0000 UTC m=+1.087213112 container remove d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:55:06 compute-0 systemd[1]: libpod-conmon-d0fbb16cc6f82fa4bbae55602f83685a8b47bed46268c86bb9dcf15e17d13ab3.scope: Deactivated successfully.
Oct 01 13:55:07 compute-0 sudo[288270]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:07 compute-0 sudo[288413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:07 compute-0 sudo[288413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:07 compute-0 sudo[288413]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:07 compute-0 sudo[288438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:55:07 compute-0 sudo[288438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:07 compute-0 sudo[288438]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:07 compute-0 sudo[288463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:07 compute-0 sudo[288463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:07 compute-0 sudo[288463]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:07 compute-0 sudo[288488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:55:07 compute-0 sudo[288488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:07 compute-0 podman[288550]: 2025-10-01 13:55:07.870381682 +0000 UTC m=+0.065670333 container create 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 13:55:07 compute-0 systemd[1]: Started libpod-conmon-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope.
Oct 01 13:55:07 compute-0 podman[288550]: 2025-10-01 13:55:07.843822026 +0000 UTC m=+0.039110727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:07 compute-0 podman[288550]: 2025-10-01 13:55:07.980598104 +0000 UTC m=+0.175886745 container init 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:07 compute-0 podman[288550]: 2025-10-01 13:55:07.991963717 +0000 UTC m=+0.187252347 container start 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:55:07 compute-0 podman[288550]: 2025-10-01 13:55:07.996055846 +0000 UTC m=+0.191344497 container attach 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:55:07 compute-0 musing_bassi[288566]: 167 167
Oct 01 13:55:07 compute-0 systemd[1]: libpod-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope: Deactivated successfully.
Oct 01 13:55:08 compute-0 podman[288550]: 2025-10-01 13:55:07.999936341 +0000 UTC m=+0.195224992 container died 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 13:55:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf9370bf2da6b2612562df555a88f54020db058eea3fa288d0733971f7c6bbb4-merged.mount: Deactivated successfully.
Oct 01 13:55:08 compute-0 podman[288550]: 2025-10-01 13:55:08.052121243 +0000 UTC m=+0.247409894 container remove 8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bassi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:08 compute-0 systemd[1]: libpod-conmon-8796cdf9e4e7990f93e6f87663f1500f62ac195ee58fe4a928a36f85a4c360ec.scope: Deactivated successfully.
Oct 01 13:55:08 compute-0 ceph-mon[74802]: pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:08 compute-0 podman[288590]: 2025-10-01 13:55:08.313815921 +0000 UTC m=+0.078524813 container create b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 13:55:08 compute-0 systemd[1]: Started libpod-conmon-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope.
Oct 01 13:55:08 compute-0 podman[288590]: 2025-10-01 13:55:08.282905077 +0000 UTC m=+0.047614019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:55:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:55:08 compute-0 podman[288590]: 2025-10-01 13:55:08.426003335 +0000 UTC m=+0.190712277 container init b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:55:08 compute-0 podman[288590]: 2025-10-01 13:55:08.436478209 +0000 UTC m=+0.201187071 container start b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 13:55:08 compute-0 podman[288590]: 2025-10-01 13:55:08.442187761 +0000 UTC m=+0.206896703 container attach b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 13:55:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]: {
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_id": 0,
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "type": "bluestore"
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     },
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_id": 2,
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "type": "bluestore"
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     },
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_id": 1,
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:         "type": "bluestore"
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]:     }
Oct 01 13:55:09 compute-0 practical_heisenberg[288607]: }
Oct 01 13:55:09 compute-0 systemd[1]: libpod-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Deactivated successfully.
Oct 01 13:55:09 compute-0 systemd[1]: libpod-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Consumed 1.096s CPU time.
Oct 01 13:55:09 compute-0 podman[288590]: 2025-10-01 13:55:09.525789607 +0000 UTC m=+1.290498559 container died b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-abaadbe3b20f52506efb534cdf4eebf7dc14f1ff2765d98865701c656aecb8d7-merged.mount: Deactivated successfully.
Oct 01 13:55:09 compute-0 podman[288590]: 2025-10-01 13:55:09.598657548 +0000 UTC m=+1.363366410 container remove b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:55:09 compute-0 systemd[1]: libpod-conmon-b8efe39df48fcf14a433183cad96219751c0292a1f119f36a69ee2ab27f40014.scope: Deactivated successfully.
Oct 01 13:55:09 compute-0 sudo[288488]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:55:09 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:09 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:55:09 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:09 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6c2c1a99-2775-401e-a802-85da88a793be does not exist
Oct 01 13:55:09 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b90902a-b2fd-4bdd-ba94-db515baa9c59 does not exist
Oct 01 13:55:09 compute-0 sudo[288651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:55:09 compute-0 sudo[288651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:09 compute-0 sudo[288651]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:09 compute-0 sudo[288676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:55:09 compute-0 sudo[288676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:55:09 compute-0 sudo[288676]: pam_unix(sudo:session): session closed for user root
Oct 01 13:55:10 compute-0 ceph-mon[74802]: pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:10 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:55:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct 01 13:55:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct 01 13:55:10 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct 01 13:55:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:11 compute-0 podman[288704]: 2025-10-01 13:55:11.56223401 +0000 UTC m=+0.092598991 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:55:11 compute-0 podman[288702]: 2025-10-01 13:55:11.563050537 +0000 UTC m=+0.104890704 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, io.buildah.version=1.41.3)
Oct 01 13:55:11 compute-0 podman[288703]: 2025-10-01 13:55:11.585764701 +0000 UTC m=+0.124508509 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 13:55:11 compute-0 podman[288701]: 2025-10-01 13:55:11.602109981 +0000 UTC m=+0.148026437 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller)
Oct 01 13:55:11 compute-0 ceph-mon[74802]: osdmap e179: 3 total, 3 up, 3 in
Oct 01 13:55:11 compute-0 ceph-mon[74802]: pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.322 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:55:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:55:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:55:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct 01 13:55:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct 01 13:55:12 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct 01 13:55:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct 01 13:55:13 compute-0 ceph-mon[74802]: osdmap e180: 3 total, 3 up, 3 in
Oct 01 13:55:13 compute-0 ceph-mon[74802]: pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Oct 01 13:55:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct 01 13:55:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct 01 13:55:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct 01 13:55:14 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 13:55:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 49 op/s
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:55:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct 01 13:55:15 compute-0 ceph-mon[74802]: osdmap e181: 3 total, 3 up, 3 in
Oct 01 13:55:15 compute-0 ceph-mon[74802]: pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 KiB/s wr, 49 op/s
Oct 01 13:55:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct 01 13:55:15 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct 01 13:55:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:55:15 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858657067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:55:15 compute-0 nova_compute[260022]: 2025-10-01 13:55:15.813 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.019 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.020 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.153 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.170 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.171 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.171 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.343 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:55:16 compute-0 ceph-mon[74802]: osdmap e182: 3 total, 3 up, 3 in
Oct 01 13:55:16 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2858657067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:55:16 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:55:16 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337627271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.837 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.844 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.877 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.879 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.879 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.881 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.882 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.896 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 13:55:16 compute-0 nova_compute[260022]: 2025-10-01 13:55:16.896 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Oct 01 13:55:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:17 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2337627271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:55:17 compute-0 ceph-mon[74802]: pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 6.3 KiB/s wr, 76 op/s
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:18 compute-0 nova_compute[260022]: 2025-10-01 13:55:18.944 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:18 compute-0 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:18 compute-0 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:18 compute-0 nova_compute[260022]: 2025-10-01 13:55:18.945 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:55:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.2 KiB/s wr, 103 op/s
Oct 01 13:55:19 compute-0 nova_compute[260022]: 2025-10-01 13:55:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:20 compute-0 ceph-mon[74802]: pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 6.2 KiB/s wr, 103 op/s
Oct 01 13:55:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.9 KiB/s wr, 82 op/s
Oct 01 13:55:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct 01 13:55:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct 01 13:55:21 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct 01 13:55:22 compute-0 ceph-mon[74802]: pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.9 KiB/s wr, 82 op/s
Oct 01 13:55:22 compute-0 ceph-mon[74802]: osdmap e183: 3 total, 3 up, 3 in
Oct 01 13:55:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct 01 13:55:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct 01 13:55:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct 01 13:55:22 compute-0 nova_compute[260022]: 2025-10-01 13:55:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:22 compute-0 nova_compute[260022]: 2025-10-01 13:55:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:55:22 compute-0 nova_compute[260022]: 2025-10-01 13:55:22.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:55:22 compute-0 nova_compute[260022]: 2025-10-01 13:55:22.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:55:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct 01 13:55:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct 01 13:55:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct 01 13:55:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 8.2 KiB/s wr, 142 op/s
Oct 01 13:55:23 compute-0 ceph-mon[74802]: osdmap e184: 3 total, 3 up, 3 in
Oct 01 13:55:23 compute-0 ceph-mon[74802]: osdmap e185: 3 total, 3 up, 3 in
Oct 01 13:55:24 compute-0 ceph-mon[74802]: pgmap v1573: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 8.2 KiB/s wr, 142 op/s
Oct 01 13:55:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 59 op/s
Oct 01 13:55:25 compute-0 nova_compute[260022]: 2025-10-01 13:55:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:25 compute-0 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:25 compute-0 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:55:25 compute-0 nova_compute[260022]: 2025-10-01 13:55:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 13:55:26 compute-0 ceph-mon[74802]: pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.5 KiB/s wr, 59 op/s
Oct 01 13:55:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Oct 01 13:55:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct 01 13:55:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct 01 13:55:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct 01 13:55:28 compute-0 ceph-mon[74802]: pgmap v1575: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Oct 01 13:55:28 compute-0 ceph-mon[74802]: osdmap e186: 3 total, 3 up, 3 in
Oct 01 13:55:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 50 op/s
Oct 01 13:55:30 compute-0 ceph-mon[74802]: pgmap v1577: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 50 op/s
Oct 01 13:55:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 01 13:55:32 compute-0 ceph-mon[74802]: pgmap v1578: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 01 13:55:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct 01 13:55:34 compute-0 ceph-mon[74802]: pgmap v1579: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct 01 13:55:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct 01 13:55:36 compute-0 ceph-mon[74802]: pgmap v1580: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 409 B/s wr, 4 op/s
Oct 01 13:55:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:55:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:38 compute-0 ceph-mon[74802]: pgmap v1581: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Oct 01 13:55:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:40 compute-0 ceph-mon[74802]: pgmap v1582: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:42 compute-0 ceph-mon[74802]: pgmap v1583: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:42 compute-0 podman[288831]: 2025-10-01 13:55:42.514496615 +0000 UTC m=+0.064907289 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 13:55:42 compute-0 podman[288829]: 2025-10-01 13:55:42.529344199 +0000 UTC m=+0.085056812 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 13:55:42 compute-0 podman[288832]: 2025-10-01 13:55:42.529434722 +0000 UTC m=+0.074103623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 13:55:42 compute-0 podman[288830]: 2025-10-01 13:55:42.529591147 +0000 UTC m=+0.082876222 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:55:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.834 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:55:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.835 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:55:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.846 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:55:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:43.847 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:55:44 compute-0 ceph-mon[74802]: pgmap v1584: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:46 compute-0 ceph-mon[74802]: pgmap v1585: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:55:47
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct 01 13:55:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:55:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:55:48 compute-0 ceph-mon[74802]: pgmap v1586: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:49.848 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:55:50 compute-0 ceph-mon[74802]: pgmap v1587: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:55:51.837 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:55:52 compute-0 ceph-mon[74802]: pgmap v1588: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:54 compute-0 ceph-mon[74802]: pgmap v1589: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:55:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:55:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:55:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:55:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:55:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2584635700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:55:56 compute-0 ceph-mon[74802]: pgmap v1590: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:55:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:55:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:55:58 compute-0 ceph-mon[74802]: pgmap v1591: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:55:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:00 compute-0 ceph-mon[74802]: pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:02 compute-0 ceph-mon[74802]: pgmap v1593: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:03 compute-0 ceph-mon[74802]: pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:05 compute-0 nova_compute[260022]: 2025-10-01 13:56:05.363 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:06 compute-0 ceph-mon[74802]: pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:07 compute-0 ceph-mon[74802]: pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:09 compute-0 sudo[288911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:09 compute-0 sudo[288911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:09 compute-0 sudo[288911]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:10 compute-0 sudo[288936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:56:10 compute-0 sudo[288936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:10 compute-0 sudo[288936]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:10 compute-0 sudo[288961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:10 compute-0 sudo[288961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:10 compute-0 sudo[288961]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:10 compute-0 ceph-mon[74802]: pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:10 compute-0 sudo[288986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:56:10 compute-0 sudo[288986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:10 compute-0 sudo[288986]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ecf7dcc2-31fc-41b6-bda7-37f915ff8517 does not exist
Oct 01 13:56:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 418564d9-7474-45d6-94d1-20efdc194341 does not exist
Oct 01 13:56:10 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9beb9b0f-6891-4835-92e8-d0595faf9297 does not exist
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:56:10 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:56:10 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:56:11 compute-0 sudo[289043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:11 compute-0 sudo[289043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:11 compute-0 sudo[289043]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:11 compute-0 sudo[289068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:56:11 compute-0 sudo[289068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:11 compute-0 sudo[289068]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:56:11 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:56:11 compute-0 sudo[289093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:11 compute-0 sudo[289093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:11 compute-0 sudo[289093]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:11 compute-0 sudo[289118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:56:11 compute-0 sudo[289118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.777158741 +0000 UTC m=+0.075455075 container create df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.73225254 +0000 UTC m=+0.030548894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:11 compute-0 systemd[1]: Started libpod-conmon-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope.
Oct 01 13:56:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.907654028 +0000 UTC m=+0.205950422 container init df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.917327147 +0000 UTC m=+0.215623511 container start df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 13:56:11 compute-0 frosty_maxwell[289198]: 167 167
Oct 01 13:56:11 compute-0 systemd[1]: libpod-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope: Deactivated successfully.
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.944571515 +0000 UTC m=+0.242867929 container attach df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:56:11 compute-0 podman[289182]: 2025-10-01 13:56:11.944995529 +0000 UTC m=+0.243291893 container died df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:56:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-045c3317014e1d9c3e046b2bca994ea3348307d86a2d702df6b60b5e53f741f1-merged.mount: Deactivated successfully.
Oct 01 13:56:12 compute-0 podman[289182]: 2025-10-01 13:56:12.125026084 +0000 UTC m=+0.423322448 container remove df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:12 compute-0 systemd[1]: libpod-conmon-df46d6819d357f771c99051ba3af7e1b183aabf01986ab52f700177fcd4ce4a4.scope: Deactivated successfully.
Oct 01 13:56:12 compute-0 ceph-mon[74802]: pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:12 compute-0 podman[289225]: 2025-10-01 13:56:12.323256551 +0000 UTC m=+0.067472431 container create 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.323 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:56:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:56:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:56:12 compute-0 podman[289225]: 2025-10-01 13:56:12.285486807 +0000 UTC m=+0.029702667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:12 compute-0 systemd[1]: Started libpod-conmon-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope.
Oct 01 13:56:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:12 compute-0 podman[289225]: 2025-10-01 13:56:12.457200498 +0000 UTC m=+0.201416418 container init 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 13:56:12 compute-0 podman[289225]: 2025-10-01 13:56:12.470051477 +0000 UTC m=+0.214267357 container start 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:12 compute-0 podman[289225]: 2025-10-01 13:56:12.487181613 +0000 UTC m=+0.231397483 container attach 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:13 compute-0 podman[289267]: 2025-10-01 13:56:13.545237464 +0000 UTC m=+0.065624532 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:56:13 compute-0 podman[289266]: 2025-10-01 13:56:13.575691265 +0000 UTC m=+0.114827490 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:56:13 compute-0 podman[289265]: 2025-10-01 13:56:13.576124889 +0000 UTC m=+0.113229090 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 01 13:56:13 compute-0 podman[289263]: 2025-10-01 13:56:13.589999401 +0000 UTC m=+0.130928783 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923)
Oct 01 13:56:13 compute-0 nervous_newton[289241]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:56:13 compute-0 nervous_newton[289241]: --> relative data size: 1.0
Oct 01 13:56:13 compute-0 nervous_newton[289241]: --> All data devices are unavailable
Oct 01 13:56:13 compute-0 systemd[1]: libpod-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Deactivated successfully.
Oct 01 13:56:13 compute-0 systemd[1]: libpod-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Consumed 1.102s CPU time.
Oct 01 13:56:13 compute-0 podman[289225]: 2025-10-01 13:56:13.621643219 +0000 UTC m=+1.365859099 container died 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-10fc5750fbaf33217a80df81b53b121bb81ade819cd6ac92ddd57cb5bf126402-merged.mount: Deactivated successfully.
Oct 01 13:56:13 compute-0 podman[289225]: 2025-10-01 13:56:13.683695396 +0000 UTC m=+1.427911256 container remove 16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:56:13 compute-0 systemd[1]: libpod-conmon-16c3be32300bfb1811115cadd83e5e02f525b00a5c45db6a207ac63bea5e7310.scope: Deactivated successfully.
Oct 01 13:56:13 compute-0 sudo[289118]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:13 compute-0 sudo[289360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:13 compute-0 sudo[289360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:13 compute-0 sudo[289360]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:13 compute-0 sudo[289385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:56:13 compute-0 sudo[289385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:13 compute-0 sudo[289385]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:13 compute-0 sudo[289410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:14 compute-0 sudo[289410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:14 compute-0 sudo[289410]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:14 compute-0 sudo[289435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:56:14 compute-0 sudo[289435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:14 compute-0 ceph-mon[74802]: pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.473948674 +0000 UTC m=+0.052152562 container create 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:56:14 compute-0 systemd[1]: Started libpod-conmon-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope.
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.446907263 +0000 UTC m=+0.025111231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.560946416 +0000 UTC m=+0.139150344 container init 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.573246339 +0000 UTC m=+0.151450237 container start 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:56:14 compute-0 hungry_shamir[289515]: 167 167
Oct 01 13:56:14 compute-0 systemd[1]: libpod-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope: Deactivated successfully.
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.57798957 +0000 UTC m=+0.156193478 container attach 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.578703893 +0000 UTC m=+0.156907801 container died 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 13:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-98fc521704a34fe307a438a17994d3020c72e92bc601fe93e2b477720a62dc77-merged.mount: Deactivated successfully.
Oct 01 13:56:14 compute-0 podman[289499]: 2025-10-01 13:56:14.616448945 +0000 UTC m=+0.194652823 container remove 0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 13:56:14 compute-0 systemd[1]: libpod-conmon-0c69c8a73f8d998871c3814ab41ba398fe3916c8335a8943e11f3d48e422d2ba.scope: Deactivated successfully.
Oct 01 13:56:14 compute-0 podman[289539]: 2025-10-01 13:56:14.796398558 +0000 UTC m=+0.046441500 container create fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:14 compute-0 systemd[1]: Started libpod-conmon-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope.
Oct 01 13:56:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:14 compute-0 podman[289539]: 2025-10-01 13:56:14.77699794 +0000 UTC m=+0.027040862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:14 compute-0 podman[289539]: 2025-10-01 13:56:14.888651988 +0000 UTC m=+0.138694960 container init fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:56:14 compute-0 podman[289539]: 2025-10-01 13:56:14.899462212 +0000 UTC m=+0.149505114 container start fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:14 compute-0 podman[289539]: 2025-10-01 13:56:14.902760117 +0000 UTC m=+0.152803179 container attach fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:56:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:15 compute-0 eloquent_wright[289557]: {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     "0": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "devices": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "/dev/loop3"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             ],
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_name": "ceph_lv0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_size": "21470642176",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "name": "ceph_lv0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "tags": {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_name": "ceph",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.crush_device_class": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.encrypted": "0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_id": "0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.vdo": "0"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             },
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "vg_name": "ceph_vg0"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         }
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     ],
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     "1": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "devices": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "/dev/loop4"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             ],
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_name": "ceph_lv1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_size": "21470642176",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "name": "ceph_lv1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "tags": {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_name": "ceph",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.crush_device_class": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.encrypted": "0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_id": "1",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.vdo": "0"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             },
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "vg_name": "ceph_vg1"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         }
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     ],
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     "2": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "devices": [
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "/dev/loop5"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             ],
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_name": "ceph_lv2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_size": "21470642176",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "name": "ceph_lv2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "tags": {
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.cluster_name": "ceph",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.crush_device_class": "",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.encrypted": "0",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osd_id": "2",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:                 "ceph.vdo": "0"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             },
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "type": "block",
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:             "vg_name": "ceph_vg2"
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:         }
Oct 01 13:56:15 compute-0 eloquent_wright[289557]:     ]
Oct 01 13:56:15 compute-0 eloquent_wright[289557]: }
Oct 01 13:56:15 compute-0 systemd[1]: libpod-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope: Deactivated successfully.
Oct 01 13:56:15 compute-0 podman[289539]: 2025-10-01 13:56:15.635292717 +0000 UTC m=+0.885335649 container died fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf59ad53f056f1dc5e38f37ef4580502aef4cad41956036629ea11f884ef6d8a-merged.mount: Deactivated successfully.
Oct 01 13:56:15 compute-0 podman[289539]: 2025-10-01 13:56:15.707416085 +0000 UTC m=+0.957458987 container remove fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wright, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:15 compute-0 systemd[1]: libpod-conmon-fafd01b41c8e03bf12a6af873de2ccf9cef090775f9b3ea42f319eaaaeebcf23.scope: Deactivated successfully.
Oct 01 13:56:15 compute-0 sudo[289435]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:15 compute-0 sudo[289577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:15 compute-0 sudo[289577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:15 compute-0 sudo[289577]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:15 compute-0 sudo[289602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:56:15 compute-0 sudo[289602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:15 compute-0 sudo[289602]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:15 compute-0 sudo[289627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:15 compute-0 sudo[289627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:15 compute-0 sudo[289627]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:16 compute-0 sudo[289652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:56:16 compute-0 sudo[289652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:16 compute-0 ceph-mon[74802]: pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:16 compute-0 nova_compute[260022]: 2025-10-01 13:56:16.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.447228806 +0000 UTC m=+0.063086730 container create c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:56:16 compute-0 systemd[1]: Started libpod-conmon-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope.
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.422318843 +0000 UTC m=+0.038176777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.548899316 +0000 UTC m=+0.164757260 container init c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.562387736 +0000 UTC m=+0.178245660 container start c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 13:56:16 compute-0 determined_kapitsa[289733]: 167 167
Oct 01 13:56:16 compute-0 systemd[1]: libpod-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope: Deactivated successfully.
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.568548252 +0000 UTC m=+0.184406156 container attach c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.568986486 +0000 UTC m=+0.184844400 container died c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5617bbb45820dabec968bc5812e3d4ef1dbf753a0b23d0daf2402084fb9e7f-merged.mount: Deactivated successfully.
Oct 01 13:56:16 compute-0 podman[289717]: 2025-10-01 13:56:16.620073774 +0000 UTC m=+0.235931708 container remove c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 13:56:16 compute-0 systemd[1]: libpod-conmon-c90d75a053e5f1bbcc2489dadfadd7f77925928bcc185fdf30c183366b6f79d5.scope: Deactivated successfully.
Oct 01 13:56:16 compute-0 podman[289757]: 2025-10-01 13:56:16.87762224 +0000 UTC m=+0.067400148 container create 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:56:16 compute-0 systemd[1]: Started libpod-conmon-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope.
Oct 01 13:56:16 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:56:16 compute-0 podman[289757]: 2025-10-01 13:56:16.850437983 +0000 UTC m=+0.040215931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:56:16 compute-0 podman[289757]: 2025-10-01 13:56:16.970293023 +0000 UTC m=+0.160070971 container init 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:56:16 compute-0 podman[289757]: 2025-10-01 13:56:16.984310639 +0000 UTC m=+0.174088547 container start 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:56:16 compute-0 podman[289757]: 2025-10-01 13:56:16.988716299 +0000 UTC m=+0.178494207 container attach 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]: {
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_id": 0,
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "type": "bluestore"
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     },
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_id": 2,
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "type": "bluestore"
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     },
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_id": 1,
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:         "type": "bluestore"
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]:     }
Oct 01 13:56:17 compute-0 sharp_kapitsa[289773]: }
Oct 01 13:56:17 compute-0 systemd[1]: libpod-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Deactivated successfully.
Oct 01 13:56:17 compute-0 podman[289757]: 2025-10-01 13:56:17.981338145 +0000 UTC m=+1.171116093 container died 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:17 compute-0 systemd[1]: libpod-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Consumed 1.009s CPU time.
Oct 01 13:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba1aefebef6ca789265be85ae1dc1f86179be2c83ceb40a7e4f45a64b5584d98-merged.mount: Deactivated successfully.
Oct 01 13:56:18 compute-0 podman[289757]: 2025-10-01 13:56:18.056420349 +0000 UTC m=+1.246198257 container remove 83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kapitsa, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:56:18 compute-0 systemd[1]: libpod-conmon-83fd72ee3457aee744d722946f5172e4733bee444c2eed8899e8c0227fb37e50.scope: Deactivated successfully.
Oct 01 13:56:18 compute-0 sudo[289652]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:56:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:56:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9d9d4284-f04b-4a8a-904c-6dcd5f4259d8 does not exist
Oct 01 13:56:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev db0eae49-eca7-4b64-a018-2e549bbdc49c does not exist
Oct 01 13:56:18 compute-0 sudo[289821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:56:18 compute-0 sudo[289821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:18 compute-0 sudo[289821]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:18 compute-0 sudo[289846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:56:18 compute-0 sudo[289846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:56:18 compute-0 sudo[289846]: pam_unix(sudo:session): session closed for user root
Oct 01 13:56:18 compute-0 ceph-mon[74802]: pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:56:18 compute-0 nova_compute[260022]: 2025-10-01 13:56:18.902 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:56:18 compute-0 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:56:18 compute-0 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:56:18 compute-0 nova_compute[260022]: 2025-10-01 13:56:18.904 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:56:18 compute-0 nova_compute[260022]: 2025-10-01 13:56:18.905 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:56:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:56:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8267394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.317 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.473 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.475 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.476 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.477 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.583 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.602 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.603 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.603 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.762 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.847 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.848 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.863 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.881 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 13:56:19 compute-0 nova_compute[260022]: 2025-10-01 13:56:19.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:56:20 compute-0 ceph-mon[74802]: pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/8267394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:56:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:56:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866529483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:56:20 compute-0 nova_compute[260022]: 2025-10-01 13:56:20.353 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:56:20 compute-0 nova_compute[260022]: 2025-10-01 13:56:20.358 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:56:20 compute-0 nova_compute[260022]: 2025-10-01 13:56:20.408 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:56:20 compute-0 nova_compute[260022]: 2025-10-01 13:56:20.411 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:56:20 compute-0 nova_compute[260022]: 2025-10-01 13:56:20.411 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:56:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:21 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2866529483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.414 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.414 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.415 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.415 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.439 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.440 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:22 compute-0 nova_compute[260022]: 2025-10-01 13:56:22.440 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:56:22 compute-0 ceph-mon[74802]: pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:23 compute-0 ceph-mon[74802]: pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:25 compute-0 nova_compute[260022]: 2025-10-01 13:56:25.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:25 compute-0 nova_compute[260022]: 2025-10-01 13:56:25.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:26 compute-0 ceph-mon[74802]: pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:28 compute-0 ceph-mon[74802]: pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:29 compute-0 nova_compute[260022]: 2025-10-01 13:56:29.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:56:30 compute-0 ceph-mon[74802]: pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:32 compute-0 ceph-mon[74802]: pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:34 compute-0 ceph-mon[74802]: pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:36 compute-0 ceph-mon[74802]: pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:37 compute-0 ceph-mon[74802]: pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:40 compute-0 ceph-mon[74802]: pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:42 compute-0 ceph-mon[74802]: pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:43.949 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:56:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:43.951 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:56:44 compute-0 ceph-mon[74802]: pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.169476) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004169586, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2108, "num_deletes": 258, "total_data_size": 3448727, "memory_usage": 3505120, "flush_reason": "Manual Compaction"}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004200067, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3380389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30458, "largest_seqno": 32565, "table_properties": {"data_size": 3370684, "index_size": 6199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19568, "raw_average_key_size": 20, "raw_value_size": 3351418, "raw_average_value_size": 3505, "num_data_blocks": 274, "num_entries": 956, "num_filter_entries": 956, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759326787, "oldest_key_time": 1759326787, "file_creation_time": 1759327004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 30667 microseconds, and 9462 cpu microseconds.
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.200156) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3380389 bytes OK
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.200182) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201823) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201840) EVENT_LOG_v1 {"time_micros": 1759327004201835, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.201863) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3439869, prev total WAL file size 3439869, number of live WAL files 2.
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.202990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3301KB)], [68(7446KB)]
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004203050, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11005967, "oldest_snapshot_seqno": -1}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5488 keys, 9255872 bytes, temperature: kUnknown
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004272831, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9255872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217310, "index_size": 23732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 137595, "raw_average_key_size": 25, "raw_value_size": 9116280, "raw_average_value_size": 1661, "num_data_blocks": 973, "num_entries": 5488, "num_filter_entries": 5488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.273207) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9255872 bytes
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.274509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.5 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6018, records dropped: 530 output_compression: NoCompression
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.274541) EVENT_LOG_v1 {"time_micros": 1759327004274524, "job": 38, "event": "compaction_finished", "compaction_time_micros": 69877, "compaction_time_cpu_micros": 41478, "output_level": 6, "num_output_files": 1, "total_output_size": 9255872, "num_input_records": 6018, "num_output_records": 5488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004276048, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327004279087, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.202890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:56:44.279362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:56:44 compute-0 podman[289918]: 2025-10-01 13:56:44.525672763 +0000 UTC m=+0.067134150 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:56:44 compute-0 podman[289917]: 2025-10-01 13:56:44.528990549 +0000 UTC m=+0.072586344 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 13:56:44 compute-0 podman[289916]: 2025-10-01 13:56:44.532667215 +0000 UTC m=+0.080464214 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 13:56:44 compute-0 podman[289915]: 2025-10-01 13:56:44.561415081 +0000 UTC m=+0.108896380 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:56:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:46 compute-0 ceph-mon[74802]: pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:56:47
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data']
Oct 01 13:56:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:56:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:56:48 compute-0 ceph-mon[74802]: pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:50 compute-0 ceph-mon[74802]: pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:56:51.953 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:56:52 compute-0 ceph-mon[74802]: pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:54 compute-0 ceph-mon[74802]: pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:56:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:56:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:56:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:56:56 compute-0 ceph-mon[74802]: pgmap v1620: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:56:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1765015405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:56:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:56:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:56:58 compute-0 ceph-mon[74802]: pgmap v1621: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:56:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:00 compute-0 ceph-mon[74802]: pgmap v1622: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:02 compute-0 ceph-mon[74802]: pgmap v1623: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:04 compute-0 ceph-mon[74802]: pgmap v1624: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:05 compute-0 ceph-mon[74802]: pgmap v1625: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:07 compute-0 nova_compute[260022]: 2025-10-01 13:57:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:08 compute-0 ceph-mon[74802]: pgmap v1626: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:10 compute-0 ceph-mon[74802]: pgmap v1627: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:12 compute-0 ceph-mon[74802]: pgmap v1628: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:57:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.324 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:57:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:57:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:14 compute-0 ceph-mon[74802]: pgmap v1629: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:15 compute-0 podman[290000]: 2025-10-01 13:57:15.512674933 +0000 UTC m=+0.067196202 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:57:15 compute-0 podman[290010]: 2025-10-01 13:57:15.512933421 +0000 UTC m=+0.055758207 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:57:15 compute-0 podman[290005]: 2025-10-01 13:57:15.527751884 +0000 UTC m=+0.073642058 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, io.buildah.version=1.41.3)
Oct 01 13:57:15 compute-0 podman[289999]: 2025-10-01 13:57:15.551281334 +0000 UTC m=+0.112023160 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:57:16 compute-0 ceph-mon[74802]: pgmap v1630: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:18 compute-0 ceph-mon[74802]: pgmap v1631: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:18 compute-0 sudo[290079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:18 compute-0 sudo[290079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:57:18 compute-0 sudo[290079]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:57:18 compute-0 sudo[290105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:57:18 compute-0 sudo[290105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:18 compute-0 sudo[290105]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:18 compute-0 sudo[290130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:18 compute-0 sudo[290130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:18 compute-0 sudo[290130]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:18 compute-0 sudo[290174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:57:18 compute-0 sudo[290174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:57:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521539777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:57:18 compute-0 nova_compute[260022]: 2025-10-01 13:57:18.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.054 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.056 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.056 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.057 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:57:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:19 compute-0 sudo[290174]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.160 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.198 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.199 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.199 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 39828c43-0274-40f7-b3a5-861716ded07a does not exist
Oct 01 13:57:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9ce96aef-56b0-4bbc-b489-098a8c508f0d does not exist
Oct 01 13:57:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 31b0e9d3-ff67-4751-b1f2-67b8596c3c9a does not exist
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3521539777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:57:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.279 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:57:19 compute-0 sudo[290233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:19 compute-0 sudo[290233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:19 compute-0 sudo[290233]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:19 compute-0 sudo[290259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:57:19 compute-0 sudo[290259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:19 compute-0 sudo[290259]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:19 compute-0 sudo[290300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:19 compute-0 sudo[290300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:19 compute-0 sudo[290300]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:19 compute-0 sudo[290328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:57:19 compute-0 sudo[290328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:57:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966520485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.744 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.752 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.771 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.774 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:57:19 compute-0 nova_compute[260022]: 2025-10-01 13:57:19.775 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:57:19 compute-0 podman[290395]: 2025-10-01 13:57:19.938979553 +0000 UTC m=+0.058306349 container create abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:57:19 compute-0 systemd[1]: Started libpod-conmon-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope.
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:19.908696058 +0000 UTC m=+0.028022904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:20.034706083 +0000 UTC m=+0.154032889 container init abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:20.044199606 +0000 UTC m=+0.163526392 container start abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:20.047891873 +0000 UTC m=+0.167218659 container attach abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 13:57:20 compute-0 amazing_sutherland[290411]: 167 167
Oct 01 13:57:20 compute-0 systemd[1]: libpod-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope: Deactivated successfully.
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:20.053541893 +0000 UTC m=+0.172868649 container died abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e4521c2fc648582d9843f9109315ceddc7132b64be57cdc56edc5f941369ef-merged.mount: Deactivated successfully.
Oct 01 13:57:20 compute-0 podman[290395]: 2025-10-01 13:57:20.098140435 +0000 UTC m=+0.217467211 container remove abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:57:20 compute-0 systemd[1]: libpod-conmon-abe32fc63a0990c960b30f02ab9f1819e2583762784923471a4ff9b7163a54c0.scope: Deactivated successfully.
Oct 01 13:57:20 compute-0 ceph-mon[74802]: pgmap v1632: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2966520485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:57:20 compute-0 podman[290437]: 2025-10-01 13:57:20.296710101 +0000 UTC m=+0.061327796 container create f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 13:57:20 compute-0 systemd[1]: Started libpod-conmon-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope.
Oct 01 13:57:20 compute-0 podman[290437]: 2025-10-01 13:57:20.269047089 +0000 UTC m=+0.033664864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:20 compute-0 podman[290437]: 2025-10-01 13:57:20.402709648 +0000 UTC m=+0.167327433 container init f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 13:57:20 compute-0 podman[290437]: 2025-10-01 13:57:20.41751669 +0000 UTC m=+0.182134415 container start f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:57:20 compute-0 podman[290437]: 2025-10-01 13:57:20.421680703 +0000 UTC m=+0.186298428 container attach f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct 01 13:57:20 compute-0 nova_compute[260022]: 2025-10-01 13:57:20.776 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:21 compute-0 nova_compute[260022]: 2025-10-01 13:57:21.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:21 compute-0 crazy_yonath[290454]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:57:21 compute-0 crazy_yonath[290454]: --> relative data size: 1.0
Oct 01 13:57:21 compute-0 crazy_yonath[290454]: --> All data devices are unavailable
Oct 01 13:57:21 compute-0 systemd[1]: libpod-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Deactivated successfully.
Oct 01 13:57:21 compute-0 podman[290437]: 2025-10-01 13:57:21.590490432 +0000 UTC m=+1.355108137 container died f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:57:21 compute-0 systemd[1]: libpod-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Consumed 1.117s CPU time.
Oct 01 13:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-433e2ffb1532a7a294ef288d4500fe24762df58fa650f2d282a8d845a52fc128-merged.mount: Deactivated successfully.
Oct 01 13:57:21 compute-0 podman[290437]: 2025-10-01 13:57:21.64406544 +0000 UTC m=+1.408683145 container remove f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:57:21 compute-0 systemd[1]: libpod-conmon-f88f29e5843d11d2d0430858015bb1632287138678f70171286eed806de2de94.scope: Deactivated successfully.
Oct 01 13:57:21 compute-0 sudo[290328]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:21 compute-0 sudo[290494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:21 compute-0 sudo[290494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:21 compute-0 sudo[290494]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:21 compute-0 sudo[290519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:57:21 compute-0 sudo[290519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:21 compute-0 sudo[290519]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:21 compute-0 sudo[290544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:21 compute-0 sudo[290544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:21 compute-0 sudo[290544]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:21 compute-0 sudo[290569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:57:21 compute-0 sudo[290569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:22 compute-0 ceph-mon[74802]: pgmap v1633: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:22 compute-0 nova_compute[260022]: 2025-10-01 13:57:22.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:22 compute-0 nova_compute[260022]: 2025-10-01 13:57:22.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:57:22 compute-0 nova_compute[260022]: 2025-10-01 13:57:22.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:57:22 compute-0 nova_compute[260022]: 2025-10-01 13:57:22.375 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:57:22 compute-0 nova_compute[260022]: 2025-10-01 13:57:22.376 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.407116482 +0000 UTC m=+0.060795729 container create 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 13:57:22 compute-0 systemd[1]: Started libpod-conmon-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope.
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.385655168 +0000 UTC m=+0.039334435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.514671209 +0000 UTC m=+0.168350526 container init 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.527112055 +0000 UTC m=+0.180791302 container start 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.531599158 +0000 UTC m=+0.185278475 container attach 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:57:22 compute-0 optimistic_hypatia[290650]: 167 167
Oct 01 13:57:22 compute-0 systemd[1]: libpod-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope: Deactivated successfully.
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.535400509 +0000 UTC m=+0.189079756 container died 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:57:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f461b5eaa30642aba0951546593080a2882abd27572bde96ac3c593450e6cebe-merged.mount: Deactivated successfully.
Oct 01 13:57:22 compute-0 podman[290633]: 2025-10-01 13:57:22.586793897 +0000 UTC m=+0.240473154 container remove 0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:57:22 compute-0 systemd[1]: libpod-conmon-0f2cdc206862dfcf2eb556b30aa25be79b11e15ab9be0a8c2f4e0d2ea4631bbc.scope: Deactivated successfully.
Oct 01 13:57:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:22 compute-0 podman[290674]: 2025-10-01 13:57:22.816931169 +0000 UTC m=+0.070026982 container create 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:57:22 compute-0 systemd[1]: Started libpod-conmon-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope.
Oct 01 13:57:22 compute-0 podman[290674]: 2025-10-01 13:57:22.788762161 +0000 UTC m=+0.041858064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:22 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:22 compute-0 podman[290674]: 2025-10-01 13:57:22.928128163 +0000 UTC m=+0.181224066 container init 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 01 13:57:22 compute-0 podman[290674]: 2025-10-01 13:57:22.94247219 +0000 UTC m=+0.195568013 container start 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:57:22 compute-0 podman[290674]: 2025-10-01 13:57:22.945826006 +0000 UTC m=+0.198921829 container attach 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:57:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:23 compute-0 youthful_villani[290690]: {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     "0": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "devices": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "/dev/loop3"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             ],
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_name": "ceph_lv0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_size": "21470642176",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "name": "ceph_lv0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "tags": {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_name": "ceph",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.crush_device_class": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.encrypted": "0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_id": "0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.vdo": "0"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             },
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "vg_name": "ceph_vg0"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         }
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     ],
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     "1": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "devices": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "/dev/loop4"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             ],
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_name": "ceph_lv1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_size": "21470642176",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "name": "ceph_lv1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "tags": {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_name": "ceph",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.crush_device_class": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.encrypted": "0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_id": "1",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.vdo": "0"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             },
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "vg_name": "ceph_vg1"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         }
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     ],
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     "2": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "devices": [
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "/dev/loop5"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             ],
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_name": "ceph_lv2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_size": "21470642176",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "name": "ceph_lv2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "tags": {
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.cluster_name": "ceph",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.crush_device_class": "",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.encrypted": "0",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osd_id": "2",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:                 "ceph.vdo": "0"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             },
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "type": "block",
Oct 01 13:57:23 compute-0 youthful_villani[290690]:             "vg_name": "ceph_vg2"
Oct 01 13:57:23 compute-0 youthful_villani[290690]:         }
Oct 01 13:57:23 compute-0 youthful_villani[290690]:     ]
Oct 01 13:57:23 compute-0 youthful_villani[290690]: }
Oct 01 13:57:23 compute-0 systemd[1]: libpod-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope: Deactivated successfully.
Oct 01 13:57:23 compute-0 podman[290674]: 2025-10-01 13:57:23.736462477 +0000 UTC m=+0.989558330 container died 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1e1f2149c1c7a736cc7fa095597859a2ca1d9c874f64e8cdb23bb91d87ba62-merged.mount: Deactivated successfully.
Oct 01 13:57:23 compute-0 podman[290674]: 2025-10-01 13:57:23.820855366 +0000 UTC m=+1.073951209 container remove 180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:57:23 compute-0 systemd[1]: libpod-conmon-180258ed3669c94e906dc14f45ed8d114b9a0d594ad9c28d093d0c55f7349da7.scope: Deactivated successfully.
Oct 01 13:57:23 compute-0 sudo[290569]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:23 compute-0 sudo[290714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:23 compute-0 sudo[290714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:23 compute-0 sudo[290714]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:24 compute-0 sudo[290739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:57:24 compute-0 sudo[290739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:24 compute-0 sudo[290739]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:24 compute-0 sudo[290764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:24 compute-0 sudo[290764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:24 compute-0 sudo[290764]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:24 compute-0 sudo[290789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:57:24 compute-0 sudo[290789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:24 compute-0 ceph-mon[74802]: pgmap v1634: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.703666513 +0000 UTC m=+0.066224680 container create b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 13:57:24 compute-0 systemd[1]: Started libpod-conmon-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope.
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.678002927 +0000 UTC m=+0.040561134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.809563578 +0000 UTC m=+0.172121795 container init b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.82280398 +0000 UTC m=+0.185362147 container start b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.826633092 +0000 UTC m=+0.189191259 container attach b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:57:24 compute-0 objective_neumann[290872]: 167 167
Oct 01 13:57:24 compute-0 systemd[1]: libpod-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope: Deactivated successfully.
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.830386761 +0000 UTC m=+0.192944918 container died b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e80a799d31ff9091e4616775c973b80eb161432dbcf0e0d9f9d8acbbfb19949f-merged.mount: Deactivated successfully.
Oct 01 13:57:24 compute-0 podman[290856]: 2025-10-01 13:57:24.887979117 +0000 UTC m=+0.250537284 container remove b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_neumann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 13:57:24 compute-0 systemd[1]: libpod-conmon-b661595991a65bdc0baccf9642bc2955ee2177e91a6087ed835675371e10fc07.scope: Deactivated successfully.
Oct 01 13:57:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:25 compute-0 podman[290896]: 2025-10-01 13:57:25.15485162 +0000 UTC m=+0.076464727 container create 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:57:25 compute-0 systemd[1]: Started libpod-conmon-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope.
Oct 01 13:57:25 compute-0 podman[290896]: 2025-10-01 13:57:25.126439924 +0000 UTC m=+0.048053071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:57:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:57:25 compute-0 podman[290896]: 2025-10-01 13:57:25.325347372 +0000 UTC m=+0.246960529 container init 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:57:25 compute-0 podman[290896]: 2025-10-01 13:57:25.33750126 +0000 UTC m=+0.259114367 container start 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:57:25 compute-0 nova_compute[260022]: 2025-10-01 13:57:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:25 compute-0 podman[290896]: 2025-10-01 13:57:25.361495514 +0000 UTC m=+0.283108591 container attach 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:57:26 compute-0 ceph-mon[74802]: pgmap v1635: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:26 compute-0 nova_compute[260022]: 2025-10-01 13:57:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:57:26 compute-0 kind_euclid[290913]: {
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_id": 0,
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "type": "bluestore"
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     },
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_id": 2,
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "type": "bluestore"
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     },
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_id": 1,
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:57:26 compute-0 kind_euclid[290913]:         "type": "bluestore"
Oct 01 13:57:26 compute-0 kind_euclid[290913]:     }
Oct 01 13:57:26 compute-0 kind_euclid[290913]: }
Oct 01 13:57:26 compute-0 systemd[1]: libpod-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Deactivated successfully.
Oct 01 13:57:26 compute-0 podman[290896]: 2025-10-01 13:57:26.440775422 +0000 UTC m=+1.362388539 container died 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:57:26 compute-0 systemd[1]: libpod-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Consumed 1.068s CPU time.
Oct 01 13:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b992bb7f54d42e97e4fc991d2b3f9d551d3d5bd389de72f898b934318cdd354f-merged.mount: Deactivated successfully.
Oct 01 13:57:26 compute-0 podman[290896]: 2025-10-01 13:57:26.518192369 +0000 UTC m=+1.439805476 container remove 66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:57:26 compute-0 systemd[1]: libpod-conmon-66c76a536defcf40402e1678b26d6e8eda05427566c652578471eb9b7e0f1784.scope: Deactivated successfully.
Oct 01 13:57:26 compute-0 sudo[290789]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:57:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:57:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:26 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 41ca8277-13ce-4734-9a4c-618c747dfdd7 does not exist
Oct 01 13:57:26 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 755aa8c7-a409-40b2-a96a-bc7a6f13dd1b does not exist
Oct 01 13:57:26 compute-0 sudo[290960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:57:26 compute-0 sudo[290960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:26 compute-0 sudo[290960]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:26 compute-0 sudo[290985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:57:26 compute-0 sudo[290985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:57:26 compute-0 sudo[290985]: pam_unix(sudo:session): session closed for user root
Oct 01 13:57:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:57:27 compute-0 ceph-mon[74802]: pgmap v1636: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.258 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.262 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.265 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.267 161890 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpuzgco15p/privsep.sock']
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.996 161890 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.997 161890 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpuzgco15p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.861 291014 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.868 291014 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.871 291014 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 01 13:57:29 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:29.872 291014 INFO oslo.privsep.daemon [-] privsep daemon running as pid 291014
Oct 01 13:57:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.001 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[39f10355-3784-47db-b3a1-6f949329d476]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:30 compute-0 ceph-mon[74802]: pgmap v1637: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:57:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:57:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:30.938 291014 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:57:31 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:31.040 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[89c345c0-3604-44b8-bb69-cbef23c46725]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:32 compute-0 ceph-mon[74802]: pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.722 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.724 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.726 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:32 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:32.728 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[fea7e3c5-4a73-4f67-9145-cf348fc27918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:34 compute-0 ceph-mon[74802]: pgmap v1639: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.277 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.278 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.279 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:35.280 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[4326b8c0-1de9-4e6e-a722-5ddaef919420]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:36 compute-0 ceph-mon[74802]: pgmap v1640: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:38 compute-0 ceph-mon[74802]: pgmap v1641: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:40 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.170 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:40 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.173 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:40 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.175 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:40 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:40.176 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[5342a964-f8cd-48ee-916d-b3bdd3e5d0ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:40 compute-0 ceph-mon[74802]: pgmap v1642: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:42 compute-0 ceph-mon[74802]: pgmap v1643: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.025 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.028 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.031 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:43.032 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[17acc2a0-62db-4e3c-a7b8-d63652f64fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:44 compute-0 ceph-mon[74802]: pgmap v1644: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:44 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.727 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:44 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.729 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:57:44 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:44.731 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:57:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:46 compute-0 ceph-mon[74802]: pgmap v1645: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:46 compute-0 podman[291022]: 2025-10-01 13:57:46.550260969 +0000 UTC m=+0.090667740 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:57:46 compute-0 podman[291020]: 2025-10-01 13:57:46.568493999 +0000 UTC m=+0.109476889 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 13:57:46 compute-0 podman[291021]: 2025-10-01 13:57:46.601486501 +0000 UTC m=+0.139685512 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 13:57:46 compute-0 podman[291019]: 2025-10-01 13:57:46.62876158 +0000 UTC m=+0.169909414 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.430 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.432 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.434 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:47 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:47.435 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[93d27265-026f-4b65-bbc1-4744c57602c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:57:47
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct 01 13:57:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:57:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:57:48 compute-0 ceph-mon[74802]: pgmap v1646: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.024 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.026 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.028 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:49 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:49.029 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[a771a8ba-0383-48fc-8760-9cbfcd63a1b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:50 compute-0 ceph-mon[74802]: pgmap v1647: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:52 compute-0 ceph-mon[74802]: pgmap v1648: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:54 compute-0 ceph-mon[74802]: pgmap v1649: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:57:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:57:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:57:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:57:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:57:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/981184135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:57:55 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.908 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 10.100.0.2 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:57:55 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.910 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:57:55 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.911 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:57:55 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:57:55.912 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[478d397f-8fa4-459d-8530-beab3304af25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:57:56 compute-0 ceph-mon[74802]: pgmap v1650: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:57:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:57:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:57:58 compute-0 ceph-mon[74802]: pgmap v1651: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:57:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:58:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7359 writes, 33K keys, 7359 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7359 writes, 7359 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1396 writes, 6303 keys, 1396 commit groups, 1.0 writes per commit group, ingest: 8.98 MB, 0.01 MB/s
                                           Interval WAL: 1396 writes, 1396 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.6      2.21              0.16        19    0.116       0      0       0.0       0.0
                                             L6      1/0    8.83 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     45.8     37.4      3.58              0.50        18    0.199     87K    10K       0.0       0.0
                                            Sum      1/0    8.83 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     28.3     29.9      5.79              0.66        37    0.156     87K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     78.0     80.8      0.51              0.16         8    0.064     23K   2557       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     45.8     37.4      3.58              0.50        18    0.199     87K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.6      2.20              0.16        18    0.122       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 5.8 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 20.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00016 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1299,19.52 MB,6.42065%) FilterBlock(38,249.17 KB,0.0800434%) IndexBlock(38,449.00 KB,0.144236%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 13:58:00 compute-0 ceph-mon[74802]: pgmap v1652: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:02 compute-0 ceph-mon[74802]: pgmap v1653: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:03 compute-0 ceph-mon[74802]: pgmap v1654: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:06 compute-0 ceph-mon[74802]: pgmap v1655: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:07 compute-0 nova_compute[260022]: 2025-10-01 13:58:07.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:08 compute-0 ceph-mon[74802]: pgmap v1656: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:10 compute-0 ceph-mon[74802]: pgmap v1657: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:12 compute-0 ceph-mon[74802]: pgmap v1658: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:58:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:58:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:12.325 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:58:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:14 compute-0 ceph-mon[74802]: pgmap v1659: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:16 compute-0 ceph-mon[74802]: pgmap v1660: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:17 compute-0 podman[291110]: 2025-10-01 13:58:17.5524183 +0000 UTC m=+0.081515598 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923)
Oct 01 13:58:17 compute-0 podman[291103]: 2025-10-01 13:58:17.56432313 +0000 UTC m=+0.100844534 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 01 13:58:17 compute-0 podman[291102]: 2025-10-01 13:58:17.574216574 +0000 UTC m=+0.127995859 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:58:17 compute-0 podman[291104]: 2025-10-01 13:58:17.582680934 +0000 UTC m=+0.123229837 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 13:58:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:18 compute-0 ceph-mon[74802]: pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.378 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:58:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:58:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859132558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.791 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.983 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.985 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:58:18 compute-0 nova_compute[260022]: 2025-10-01 13:58:18.986 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.082 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.097 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:58:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.151 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:58:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1859132558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:58:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:58:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055574916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.571 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.579 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.610 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.613 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:58:19 compute-0 nova_compute[260022]: 2025-10-01 13:58:19.614 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:58:20 compute-0 ceph-mon[74802]: pgmap v1662: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1055574916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:58:20 compute-0 nova_compute[260022]: 2025-10-01 13:58:20.615 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:22 compute-0 ceph-mon[74802]: pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:22 compute-0 nova_compute[260022]: 2025-10-01 13:58:22.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:23 compute-0 nova_compute[260022]: 2025-10-01 13:58:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:23 compute-0 nova_compute[260022]: 2025-10-01 13:58:23.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:58:23 compute-0 nova_compute[260022]: 2025-10-01 13:58:23.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:58:23 compute-0 nova_compute[260022]: 2025-10-01 13:58:23.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:58:24 compute-0 ceph-mon[74802]: pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:24 compute-0 nova_compute[260022]: 2025-10-01 13:58:24.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:25 compute-0 nova_compute[260022]: 2025-10-01 13:58:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:26 compute-0 ceph-mon[74802]: pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:26 compute-0 nova_compute[260022]: 2025-10-01 13:58:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:26 compute-0 sudo[291226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:26 compute-0 sudo[291226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:26 compute-0 sudo[291226]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:27 compute-0 sudo[291251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:58:27 compute-0 sudo[291251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:27 compute-0 sudo[291251]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:27 compute-0 sudo[291276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:27 compute-0 sudo[291276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:27 compute-0 sudo[291276]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:27 compute-0 sudo[291301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:58:27 compute-0 sudo[291301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:27 compute-0 sudo[291301]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3bb15911-aab8-46cb-b4c9-4db6c8a9a10b does not exist
Oct 01 13:58:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9438680a-64ef-4095-bcca-8f95563050f3 does not exist
Oct 01 13:58:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6218f70a-1777-4de1-ada3-29b367836da1 does not exist
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:58:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:58:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:58:27 compute-0 sudo[291358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:27 compute-0 sudo[291358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:27 compute-0 sudo[291358]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:28 compute-0 sudo[291383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:58:28 compute-0 sudo[291383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:28 compute-0 sudo[291383]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:28 compute-0 sudo[291408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:28 compute-0 sudo[291408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:28 compute-0 sudo[291408]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:28 compute-0 sudo[291433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:58:28 compute-0 sudo[291433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:28 compute-0 ceph-mon[74802]: pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:58:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.693809624 +0000 UTC m=+0.071684165 container create 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:58:28 compute-0 systemd[1]: Started libpod-conmon-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope.
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.661772453 +0000 UTC m=+0.039646974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.817252887 +0000 UTC m=+0.195127468 container init 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.829250709 +0000 UTC m=+0.207125240 container start 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.833076362 +0000 UTC m=+0.210950913 container attach 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 13:58:28 compute-0 busy_lumiere[291515]: 167 167
Oct 01 13:58:28 compute-0 systemd[1]: libpod-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope: Deactivated successfully.
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.838790454 +0000 UTC m=+0.216664985 container died 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 13:58:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-682fac5bccea113c7f46852d65cd4eb2d1853ad6bd23db8e8fa89a9483e336bf-merged.mount: Deactivated successfully.
Oct 01 13:58:28 compute-0 podman[291499]: 2025-10-01 13:58:28.891390639 +0000 UTC m=+0.269265170 container remove 475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 13:58:28 compute-0 systemd[1]: libpod-conmon-475a66fa5892fdf338cf50d7bd7f184fe9f19d85a1e2260814c12d6d2e9c910d.scope: Deactivated successfully.
Oct 01 13:58:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:29 compute-0 podman[291537]: 2025-10-01 13:58:29.150473644 +0000 UTC m=+0.075616280 container create 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 01 13:58:29 compute-0 systemd[1]: Started libpod-conmon-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope.
Oct 01 13:58:29 compute-0 podman[291537]: 2025-10-01 13:58:29.119773796 +0000 UTC m=+0.044916482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:29 compute-0 podman[291537]: 2025-10-01 13:58:29.278960318 +0000 UTC m=+0.204102954 container init 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:58:29 compute-0 podman[291537]: 2025-10-01 13:58:29.295808344 +0000 UTC m=+0.220950970 container start 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:58:29 compute-0 podman[291537]: 2025-10-01 13:58:29.301138024 +0000 UTC m=+0.226280700 container attach 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:58:30 compute-0 ceph-mon[74802]: pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:30 compute-0 silly_jones[291554]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:58:30 compute-0 silly_jones[291554]: --> relative data size: 1.0
Oct 01 13:58:30 compute-0 silly_jones[291554]: --> All data devices are unavailable
Oct 01 13:58:30 compute-0 systemd[1]: libpod-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Deactivated successfully.
Oct 01 13:58:30 compute-0 systemd[1]: libpod-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Consumed 1.135s CPU time.
Oct 01 13:58:30 compute-0 podman[291537]: 2025-10-01 13:58:30.467075063 +0000 UTC m=+1.392217659 container died 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0184a0dfa7e2b78d693753b394e83d5b536baba72c38addd7964a8525a6460-merged.mount: Deactivated successfully.
Oct 01 13:58:30 compute-0 podman[291537]: 2025-10-01 13:58:30.530868026 +0000 UTC m=+1.456010642 container remove 94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 13:58:30 compute-0 systemd[1]: libpod-conmon-94e7d4a9041d1627d09dc727a8548bfda035ac75332a92ecc9ac41928a8e19a1.scope: Deactivated successfully.
Oct 01 13:58:30 compute-0 sudo[291433]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:30 compute-0 sudo[291598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:30 compute-0 sudo[291598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:30 compute-0 sudo[291598]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:30 compute-0 sudo[291623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:58:30 compute-0 sudo[291623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:30 compute-0 sudo[291623]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:30 compute-0 sudo[291648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:30 compute-0 sudo[291648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:30 compute-0 sudo[291648]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:30 compute-0 sudo[291673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:58:30 compute-0 sudo[291673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.312851031 +0000 UTC m=+0.025932527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.441357466 +0000 UTC m=+0.154438902 container create 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:58:31 compute-0 ceph-mon[74802]: pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:31 compute-0 systemd[1]: Started libpod-conmon-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope.
Oct 01 13:58:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.651504001 +0000 UTC m=+0.364585477 container init 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.663650209 +0000 UTC m=+0.376731655 container start 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:58:31 compute-0 exciting_noyce[291755]: 167 167
Oct 01 13:58:31 compute-0 systemd[1]: libpod-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope: Deactivated successfully.
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.718322 +0000 UTC m=+0.431403496 container attach 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.719463907 +0000 UTC m=+0.432545373 container died 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb4abcab61d3af1dd9f7f0449fc6b09f5e40007dc2a7a01ca4c839f817ab3df-merged.mount: Deactivated successfully.
Oct 01 13:58:31 compute-0 podman[291739]: 2025-10-01 13:58:31.973624815 +0000 UTC m=+0.686706261 container remove 040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 13:58:31 compute-0 systemd[1]: libpod-conmon-040ff8e2704be16e87ff38d2e54d10d1b3a0b6652c9d643b93c2205ff404af6a.scope: Deactivated successfully.
Oct 01 13:58:32 compute-0 podman[291779]: 2025-10-01 13:58:32.258958815 +0000 UTC m=+0.124149526 container create c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:58:32 compute-0 podman[291779]: 2025-10-01 13:58:32.174154974 +0000 UTC m=+0.039345755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:32 compute-0 systemd[1]: Started libpod-conmon-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope.
Oct 01 13:58:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:32 compute-0 podman[291779]: 2025-10-01 13:58:32.404060689 +0000 UTC m=+0.269251450 container init c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:58:32 compute-0 podman[291779]: 2025-10-01 13:58:32.419240613 +0000 UTC m=+0.284431314 container start c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:58:32 compute-0 podman[291779]: 2025-10-01 13:58:32.423090715 +0000 UTC m=+0.288281426 container attach c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:58:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:33 compute-0 sweet_shaw[291795]: {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     "0": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "devices": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "/dev/loop3"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             ],
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_name": "ceph_lv0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_size": "21470642176",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "name": "ceph_lv0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "tags": {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_name": "ceph",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.crush_device_class": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.encrypted": "0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_id": "0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.vdo": "0"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             },
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "vg_name": "ceph_vg0"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         }
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     ],
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     "1": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "devices": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "/dev/loop4"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             ],
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_name": "ceph_lv1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_size": "21470642176",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "name": "ceph_lv1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "tags": {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_name": "ceph",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.crush_device_class": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.encrypted": "0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_id": "1",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.vdo": "0"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             },
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "vg_name": "ceph_vg1"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         }
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     ],
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     "2": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "devices": [
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "/dev/loop5"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             ],
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_name": "ceph_lv2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_size": "21470642176",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "name": "ceph_lv2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "tags": {
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.cluster_name": "ceph",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.crush_device_class": "",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.encrypted": "0",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osd_id": "2",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:                 "ceph.vdo": "0"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             },
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "type": "block",
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:             "vg_name": "ceph_vg2"
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:         }
Oct 01 13:58:33 compute-0 sweet_shaw[291795]:     ]
Oct 01 13:58:33 compute-0 sweet_shaw[291795]: }
Oct 01 13:58:33 compute-0 systemd[1]: libpod-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope: Deactivated successfully.
Oct 01 13:58:33 compute-0 podman[291779]: 2025-10-01 13:58:33.227003039 +0000 UTC m=+1.092193770 container died c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58d39e5c3387de8b35931b899ef238380eb8fbb87580cff7143c9a00974e4fb-merged.mount: Deactivated successfully.
Oct 01 13:58:33 compute-0 podman[291779]: 2025-10-01 13:58:33.278863352 +0000 UTC m=+1.144054043 container remove c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 13:58:33 compute-0 systemd[1]: libpod-conmon-c4d44415f71b4a475783bd0e4f9751645526bb559115cfe7edbdfef78969c5bf.scope: Deactivated successfully.
Oct 01 13:58:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.320 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8:0:1:f816:3eff:fe88:2a05'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5725087-c524-4c7a-9e75-2b25ff830453, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=9037b2e3-f1e6-4cf5-be59-84263673dd05) old=Port_Binding(mac=['fa:16:3e:88:2a:05 2001:db8::f816:3eff:fe88:2a05'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe88:2a05/64', 'neutron:device_id': 'ovnmeta-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67be5091-48d6-486d-85f6-aba0fd30503c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'caaa21e6a33148468bcc047eb7b8901f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:58:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.322 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 9037b2e3-f1e6-4cf5-be59-84263673dd05 in datapath 67be5091-48d6-486d-85f6-aba0fd30503c updated
Oct 01 13:58:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.323 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67be5091-48d6-486d-85f6-aba0fd30503c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:58:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:33.324 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[930f0193-2a13-4674-b7e4-96bd426cfad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:58:33 compute-0 sudo[291673]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:33 compute-0 nova_compute[260022]: 2025-10-01 13:58:33.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:58:33 compute-0 sudo[291816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:33 compute-0 sudo[291816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:33 compute-0 sudo[291816]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:33 compute-0 sudo[291841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:58:33 compute-0 sudo[291841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:33 compute-0 sudo[291841]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:33 compute-0 sudo[291866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:33 compute-0 sudo[291866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:33 compute-0 sudo[291866]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:33 compute-0 sudo[291891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:58:33 compute-0 sudo[291891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.086790583 +0000 UTC m=+0.067140870 container create cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:58:34 compute-0 systemd[1]: Started libpod-conmon-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope.
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.057535271 +0000 UTC m=+0.037885608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.17803576 +0000 UTC m=+0.158386087 container init cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.186482229 +0000 UTC m=+0.166832476 container start cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.190823858 +0000 UTC m=+0.171174195 container attach cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:58:34 compute-0 musing_ishizaka[291972]: 167 167
Oct 01 13:58:34 compute-0 systemd[1]: libpod-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope: Deactivated successfully.
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.194187196 +0000 UTC m=+0.174537483 container died cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 13:58:34 compute-0 ceph-mon[74802]: pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc212568b06175986d5b012a4fd8b5f3f6b395a28753ad35fe2fc5927e59944a-merged.mount: Deactivated successfully.
Oct 01 13:58:34 compute-0 podman[291955]: 2025-10-01 13:58:34.248460844 +0000 UTC m=+0.228811121 container remove cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 13:58:34 compute-0 systemd[1]: libpod-conmon-cd44832708f1aad82ecec95ce9c0c779d0fff8c1f28c4606d86a1a6f839c531b.scope: Deactivated successfully.
Oct 01 13:58:34 compute-0 podman[291996]: 2025-10-01 13:58:34.516248607 +0000 UTC m=+0.077450109 container create 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:58:34 compute-0 systemd[1]: Started libpod-conmon-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope.
Oct 01 13:58:34 compute-0 podman[291996]: 2025-10-01 13:58:34.480715864 +0000 UTC m=+0.041917416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:58:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:58:34 compute-0 podman[291996]: 2025-10-01 13:58:34.61521622 +0000 UTC m=+0.176417732 container init 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:58:34 compute-0 podman[291996]: 2025-10-01 13:58:34.629910368 +0000 UTC m=+0.191111880 container start 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 13:58:34 compute-0 podman[291996]: 2025-10-01 13:58:34.634474953 +0000 UTC m=+0.195676515 container attach 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:58:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:35 compute-0 sharp_tesla[292012]: {
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_id": 0,
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "type": "bluestore"
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     },
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_id": 2,
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "type": "bluestore"
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     },
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_id": 1,
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:         "type": "bluestore"
Oct 01 13:58:35 compute-0 sharp_tesla[292012]:     }
Oct 01 13:58:35 compute-0 sharp_tesla[292012]: }
Oct 01 13:58:35 compute-0 systemd[1]: libpod-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Deactivated successfully.
Oct 01 13:58:35 compute-0 systemd[1]: libpod-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Consumed 1.029s CPU time.
Oct 01 13:58:35 compute-0 podman[292045]: 2025-10-01 13:58:35.687436852 +0000 UTC m=+0.023788548 container died 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 13:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-66cb6796dd3a20a92a46b643fc65b081e88eb98b29434969fff4d47909773dd1-merged.mount: Deactivated successfully.
Oct 01 13:58:35 compute-0 podman[292045]: 2025-10-01 13:58:35.757772413 +0000 UTC m=+0.094124079 container remove 72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:58:35 compute-0 systemd[1]: libpod-conmon-72149a6b4738823c85dde9d7d0388f5c86065a903afb40bbd8c8a9cea1945135.scope: Deactivated successfully.
Oct 01 13:58:35 compute-0 sudo[291891]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:58:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:35 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:58:35 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:35 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8a15530c-400d-4007-a8a8-c863b935dfad does not exist
Oct 01 13:58:35 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d6cf6ac1-d7ab-4eb0-aa65-e79d885c4899 does not exist
Oct 01 13:58:35 compute-0 sudo[292060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:58:35 compute-0 sudo[292060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:35 compute-0 sudo[292060]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:36 compute-0 sudo[292085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:58:36 compute-0 sudo[292085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:58:36 compute-0 sudo[292085]: pam_unix(sudo:session): session closed for user root
Oct 01 13:58:36 compute-0 ceph-mon[74802]: pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:36 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:58:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:38 compute-0 ceph-mon[74802]: pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:40 compute-0 ceph-mon[74802]: pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:42 compute-0 ceph-mon[74802]: pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:44 compute-0 ceph-mon[74802]: pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:44 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:44.926 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:58:44 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:44.932 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:58:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:46 compute-0 ceph-mon[74802]: pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:58:47
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Oct 01 13:58:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:58:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:58:48 compute-0 ceph-mon[74802]: pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:48 compute-0 podman[292111]: 2025-10-01 13:58:48.571724056 +0000 UTC m=+0.116826793 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, config_id=multipathd, container_name=multipathd)
Oct 01 13:58:48 compute-0 podman[292112]: 2025-10-01 13:58:48.580408173 +0000 UTC m=+0.120735998 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:58:48 compute-0 podman[292113]: 2025-10-01 13:58:48.582398156 +0000 UTC m=+0.116989568 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 13:58:48 compute-0 podman[292110]: 2025-10-01 13:58:48.585172835 +0000 UTC m=+0.137256485 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 01 13:58:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:50 compute-0 ceph-mon[74802]: pgmap v1677: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:51 compute-0 ceph-mon[74802]: pgmap v1678: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:53 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:58:53.935 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:58:54 compute-0 ceph-mon[74802]: pgmap v1679: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:58:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:58:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:58:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:58:56 compute-0 ceph-mon[74802]: pgmap v1680: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:58:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1288620281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:57 compute-0 ceph-mon[74802]: pgmap v1681: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:58:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:58:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:58:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:00 compute-0 ceph-mon[74802]: pgmap v1682: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:02 compute-0 ceph-mon[74802]: pgmap v1683: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:04 compute-0 ceph-mon[74802]: pgmap v1684: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:06 compute-0 ceph-mon[74802]: pgmap v1685: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:08 compute-0 ceph-mon[74802]: pgmap v1686: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:09 compute-0 nova_compute[260022]: 2025-10-01 13:59:09.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:10 compute-0 ceph-mon[74802]: pgmap v1687: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:12 compute-0 ceph-mon[74802]: pgmap v1688: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:59:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:59:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:59:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:14 compute-0 ceph-mon[74802]: pgmap v1689: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:16 compute-0 ceph-mon[74802]: pgmap v1690: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:17 compute-0 ceph-mon[74802]: pgmap v1691: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.377 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.378 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:59:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:59:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867861185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:59:18 compute-0 nova_compute[260022]: 2025-10-01 13:59:18.837 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:59:18 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3867861185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.037 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.114 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 13:59:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.190 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 13:59:19 compute-0 podman[292233]: 2025-10-01 13:59:19.542367428 +0000 UTC m=+0.085760784 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true)
Oct 01 13:59:19 compute-0 podman[292232]: 2025-10-01 13:59:19.5518826 +0000 UTC m=+0.095312547 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct 01 13:59:19 compute-0 podman[292231]: 2025-10-01 13:59:19.559001926 +0000 UTC m=+0.110049115 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 01 13:59:19 compute-0 podman[292234]: 2025-10-01 13:59:19.569659215 +0000 UTC m=+0.097579890 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 13:59:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 13:59:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633742378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.761 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.769 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.796 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.799 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 13:59:19 compute-0 nova_compute[260022]: 2025-10-01 13:59:19.799 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 13:59:19 compute-0 ceph-mon[74802]: pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:19 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/633742378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 13:59:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:21 compute-0 nova_compute[260022]: 2025-10-01 13:59:21.800 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:22 compute-0 ceph-mon[74802]: pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:24 compute-0 ceph-mon[74802]: pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:24 compute-0 nova_compute[260022]: 2025-10-01 13:59:24.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:24 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.835 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:0b:33 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99bf9a35-dc20-46cb-b2ee-481ce616830d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=2b082376-98fa-47be-a696-7bcedb47b129) old=Port_Binding(mac=['fa:16:3e:e3:0b:33 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50a4e638-13aa-4e3b-9865-06961dbe3cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:59:24 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.837 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 2b082376-98fa-47be-a696-7bcedb47b129 in datapath 50a4e638-13aa-4e3b-9865-06961dbe3cce updated
Oct 01 13:59:24 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.838 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50a4e638-13aa-4e3b-9865-06961dbe3cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:59:24 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:24.839 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[b9362add-a25b-41ac-a3ca-c76001360d55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:59:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:25 compute-0 nova_compute[260022]: 2025-10-01 13:59:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:25 compute-0 nova_compute[260022]: 2025-10-01 13:59:25.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 13:59:25 compute-0 nova_compute[260022]: 2025-10-01 13:59:25.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 13:59:25 compute-0 nova_compute[260022]: 2025-10-01 13:59:25.362 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 13:59:25 compute-0 nova_compute[260022]: 2025-10-01 13:59:25.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:26 compute-0 ceph-mon[74802]: pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:26 compute-0 nova_compute[260022]: 2025-10-01 13:59:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:27 compute-0 nova_compute[260022]: 2025-10-01 13:59:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 13:59:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.718124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167718239, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1500, "num_deletes": 256, "total_data_size": 2412740, "memory_usage": 2453504, "flush_reason": "Manual Compaction"}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167733778, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2379310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32566, "largest_seqno": 34065, "table_properties": {"data_size": 2372191, "index_size": 4190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14217, "raw_average_key_size": 19, "raw_value_size": 2358102, "raw_average_value_size": 3248, "num_data_blocks": 187, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327005, "oldest_key_time": 1759327005, "file_creation_time": 1759327167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15694 microseconds, and 6600 cpu microseconds.
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.733835) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2379310 bytes OK
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.733862) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736018) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736030) EVENT_LOG_v1 {"time_micros": 1759327167736026, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736052) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2406160, prev total WAL file size 2406160, number of live WAL files 2.
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303130' seq:72057594037927935, type:22 .. '6C6F676D0031323632' seq:0, type:0; will stop at (end)
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2323KB)], [71(9038KB)]
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167737032, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11635182, "oldest_snapshot_seqno": -1}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5690 keys, 11528813 bytes, temperature: kUnknown
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167817157, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11528813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11485924, "index_size": 27547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 142668, "raw_average_key_size": 25, "raw_value_size": 11378344, "raw_average_value_size": 1999, "num_data_blocks": 1136, "num_entries": 5690, "num_filter_entries": 5690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.817683) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11528813 bytes
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.819296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.1 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 6214, records dropped: 524 output_compression: NoCompression
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.819324) EVENT_LOG_v1 {"time_micros": 1759327167819308, "job": 40, "event": "compaction_finished", "compaction_time_micros": 80205, "compaction_time_cpu_micros": 44264, "output_level": 6, "num_output_files": 1, "total_output_size": 11528813, "num_input_records": 6214, "num_output_records": 5690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167820438, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327167823779, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.736834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:27 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:27.823951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:28 compute-0 ceph-mon[74802]: pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:30 compute-0 ceph-mon[74802]: pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:32 compute-0 ceph-mon[74802]: pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.756486) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172756530, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 250, "total_data_size": 70904, "memory_usage": 76200, "flush_reason": "Manual Compaction"}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172805611, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 69916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34066, "largest_seqno": 34355, "table_properties": {"data_size": 67987, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5471, "raw_average_key_size": 20, "raw_value_size": 64204, "raw_average_value_size": 236, "num_data_blocks": 7, "num_entries": 271, "num_filter_entries": 271, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327168, "oldest_key_time": 1759327168, "file_creation_time": 1759327172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 49173 microseconds, and 1254 cpu microseconds.
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.805659) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 69916 bytes OK
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.805691) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825793) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825835) EVENT_LOG_v1 {"time_micros": 1759327172825825, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.825859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 68768, prev total WAL file size 68768, number of live WAL files 2.
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.826373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(68KB)], [74(10MB)]
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172826414, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11598729, "oldest_snapshot_seqno": -1}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5454 keys, 8306792 bytes, temperature: kUnknown
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172911544, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 8306792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8270332, "index_size": 21694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137954, "raw_average_key_size": 25, "raw_value_size": 8171667, "raw_average_value_size": 1498, "num_data_blocks": 891, "num_entries": 5454, "num_filter_entries": 5454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327172, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.911965) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 8306792 bytes
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.916966) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.1 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(284.7) write-amplify(118.8) OK, records in: 5961, records dropped: 507 output_compression: NoCompression
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.917009) EVENT_LOG_v1 {"time_micros": 1759327172916991, "job": 42, "event": "compaction_finished", "compaction_time_micros": 85227, "compaction_time_cpu_micros": 38238, "output_level": 6, "num_output_files": 1, "total_output_size": 8306792, "num_input_records": 5961, "num_output_records": 5454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172917219, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327172921808, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.826281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:32 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-13:59:32.921950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 13:59:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:33 compute-0 ceph-mon[74802]: pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:36 compute-0 sudo[292312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:36 compute-0 sudo[292312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:36 compute-0 sudo[292312]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:36 compute-0 sudo[292337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:59:36 compute-0 sudo[292337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:36 compute-0 sudo[292337]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:36 compute-0 sudo[292362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:36 compute-0 sudo[292362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:36 compute-0 sudo[292362]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:36 compute-0 ceph-mon[74802]: pgmap v1700: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:36 compute-0 sudo[292387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 13:59:36 compute-0 sudo[292387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:36 compute-0 sudo[292387]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:37 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 83f0502a-cb51-4945-9d51-9a0ed244a7f4 does not exist
Oct 01 13:59:37 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 981aa2d8-18dc-4178-98be-aa1a22ea9314 does not exist
Oct 01 13:59:37 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1240048c-ee5e-45f6-912e-801e57676533 does not exist
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 13:59:37 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:59:37 compute-0 sudo[292443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:37 compute-0 sudo[292443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:37 compute-0 sudo[292443]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:37 compute-0 sudo[292468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:59:37 compute-0 sudo[292468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:37 compute-0 sudo[292468]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:37 compute-0 sudo[292493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:37 compute-0 sudo[292493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:37 compute-0 sudo[292493]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 13:59:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 13:59:37 compute-0 sudo[292518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 13:59:37 compute-0 sudo[292518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:37.97692739 +0000 UTC m=+0.026647257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:38.161902374 +0000 UTC m=+0.211622231 container create 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:59:38 compute-0 systemd[1]: Started libpod-conmon-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope.
Oct 01 13:59:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:38.551528564 +0000 UTC m=+0.601248491 container init 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:59:38 compute-0 ceph-mon[74802]: pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:38.565198678 +0000 UTC m=+0.614918515 container start 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 13:59:38 compute-0 wizardly_volhard[292599]: 167 167
Oct 01 13:59:38 compute-0 systemd[1]: libpod-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope: Deactivated successfully.
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:38.776450736 +0000 UTC m=+0.826170653 container attach 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 13:59:38 compute-0 podman[292582]: 2025-10-01 13:59:38.778412438 +0000 UTC m=+0.828132295 container died 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-92fe14f8309538e9fffeb5ae5fba3703796c17dea40c92bf675d32665938767d-merged.mount: Deactivated successfully.
Oct 01 13:59:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:39 compute-0 podman[292582]: 2025-10-01 13:59:39.546497097 +0000 UTC m=+1.596216964 container remove 0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 13:59:39 compute-0 systemd[1]: libpod-conmon-0a60a4d52289f04e5ab2b0d7c1d1106085cff29a14fe32bd0c8d11087a3c3a3e.scope: Deactivated successfully.
Oct 01 13:59:39 compute-0 ceph-mon[74802]: pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:39 compute-0 podman[292623]: 2025-10-01 13:59:39.791614659 +0000 UTC m=+0.044736611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:40 compute-0 podman[292623]: 2025-10-01 13:59:40.024024238 +0000 UTC m=+0.277146151 container create 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct 01 13:59:40 compute-0 systemd[1]: Started libpod-conmon-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope.
Oct 01 13:59:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:40 compute-0 podman[292623]: 2025-10-01 13:59:40.536430038 +0000 UTC m=+0.789551990 container init 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 13:59:40 compute-0 podman[292623]: 2025-10-01 13:59:40.548523703 +0000 UTC m=+0.801645575 container start 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 13:59:40 compute-0 podman[292623]: 2025-10-01 13:59:40.741469559 +0000 UTC m=+0.994591511 container attach 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 13:59:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:41 compute-0 brave_yonath[292640]: --> passed data devices: 0 physical, 3 LVM
Oct 01 13:59:41 compute-0 brave_yonath[292640]: --> relative data size: 1.0
Oct 01 13:59:41 compute-0 brave_yonath[292640]: --> All data devices are unavailable
Oct 01 13:59:41 compute-0 systemd[1]: libpod-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Deactivated successfully.
Oct 01 13:59:41 compute-0 systemd[1]: libpod-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Consumed 1.219s CPU time.
Oct 01 13:59:41 compute-0 podman[292623]: 2025-10-01 13:59:41.828828585 +0000 UTC m=+2.081950467 container died 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8580e49c3c1a13616f88defeab0b117110ba71f50925c1cb2e8723269da3976-merged.mount: Deactivated successfully.
Oct 01 13:59:42 compute-0 podman[292623]: 2025-10-01 13:59:42.173485089 +0000 UTC m=+2.426606951 container remove 5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:59:42 compute-0 systemd[1]: libpod-conmon-5083219da4973f6ec9039a290cd04ebe4a954298b69690ee9c5d772628afd8c8.scope: Deactivated successfully.
Oct 01 13:59:42 compute-0 sudo[292518]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:42 compute-0 ceph-mon[74802]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:42 compute-0 sudo[292683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:42 compute-0 sudo[292683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:42 compute-0 sudo[292683]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:42 compute-0 sudo[292708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:59:42 compute-0 sudo[292708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:42 compute-0 sudo[292708]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:42 compute-0 sudo[292733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:42 compute-0 sudo[292733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:42 compute-0 sudo[292733]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:42 compute-0 sudo[292758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 13:59:42 compute-0 sudo[292758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.856289849 +0000 UTC m=+0.040895890 container create c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:59:42 compute-0 systemd[1]: Started libpod-conmon-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope.
Oct 01 13:59:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.839862488 +0000 UTC m=+0.024468549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.942684072 +0000 UTC m=+0.127290133 container init c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.951024117 +0000 UTC m=+0.135630168 container start c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.954431445 +0000 UTC m=+0.139037506 container attach c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 13:59:42 compute-0 recursing_keldysh[292840]: 167 167
Oct 01 13:59:42 compute-0 systemd[1]: libpod-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope: Deactivated successfully.
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.957219064 +0000 UTC m=+0.141825105 container died c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-160e4a2195dca62c8084b12d337270584a799ca96eefc1b11a3b01f222305ce9-merged.mount: Deactivated successfully.
Oct 01 13:59:42 compute-0 podman[292824]: 2025-10-01 13:59:42.992797193 +0000 UTC m=+0.177403254 container remove c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:59:43 compute-0 systemd[1]: libpod-conmon-c64b1caf1075c2b15fe501f758167f216c208a5c4b9e0dc566313091f8fec1c9.scope: Deactivated successfully.
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.156873193 +0000 UTC m=+0.042969485 container create 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct 01 13:59:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:43 compute-0 systemd[1]: Started libpod-conmon-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope.
Oct 01 13:59:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.137059623 +0000 UTC m=+0.023155935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.244932499 +0000 UTC m=+0.131028871 container init 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.255871816 +0000 UTC m=+0.141968108 container start 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.25912834 +0000 UTC m=+0.145224632 container attach 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 13:59:43 compute-0 focused_sammet[292879]: {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     "0": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "devices": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "/dev/loop3"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             ],
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_name": "ceph_lv0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_size": "21470642176",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "name": "ceph_lv0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "tags": {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_name": "ceph",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.crush_device_class": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.encrypted": "0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_id": "0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.vdo": "0"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             },
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "vg_name": "ceph_vg0"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         }
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     ],
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     "1": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "devices": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "/dev/loop4"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             ],
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_name": "ceph_lv1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_size": "21470642176",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "name": "ceph_lv1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "tags": {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_name": "ceph",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.crush_device_class": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.encrypted": "0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_id": "1",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.vdo": "0"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             },
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "vg_name": "ceph_vg1"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         }
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     ],
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     "2": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "devices": [
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "/dev/loop5"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             ],
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_name": "ceph_lv2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_size": "21470642176",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "name": "ceph_lv2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "tags": {
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.cluster_name": "ceph",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.crush_device_class": "",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.encrypted": "0",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osd_id": "2",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:                 "ceph.vdo": "0"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             },
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "type": "block",
Oct 01 13:59:43 compute-0 focused_sammet[292879]:             "vg_name": "ceph_vg2"
Oct 01 13:59:43 compute-0 focused_sammet[292879]:         }
Oct 01 13:59:43 compute-0 focused_sammet[292879]:     ]
Oct 01 13:59:43 compute-0 focused_sammet[292879]: }
Oct 01 13:59:43 compute-0 systemd[1]: libpod-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope: Deactivated successfully.
Oct 01 13:59:43 compute-0 podman[292863]: 2025-10-01 13:59:43.99736558 +0000 UTC m=+0.883461892 container died 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 13:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9e5fb28e6fa3cc1598a14ab678440b060a3fc089c3dda2d149a262df564ca41-merged.mount: Deactivated successfully.
Oct 01 13:59:44 compute-0 podman[292863]: 2025-10-01 13:59:44.065899997 +0000 UTC m=+0.951996299 container remove 0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 13:59:44 compute-0 systemd[1]: libpod-conmon-0864d16d539eb211e9f4f74ab004899ebab92746c5a80955d03ef247730f126e.scope: Deactivated successfully.
Oct 01 13:59:44 compute-0 sudo[292758]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:44 compute-0 sudo[292899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:44 compute-0 sudo[292899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:44 compute-0 sudo[292899]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:44 compute-0 sudo[292924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 13:59:44 compute-0 sudo[292924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:44 compute-0 sudo[292924]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:44 compute-0 ceph-mon[74802]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:44 compute-0 sudo[292949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:44 compute-0 sudo[292949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:44 compute-0 sudo[292949]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:44 compute-0 sudo[292974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 13:59:44 compute-0 sudo[292974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.718685583 +0000 UTC m=+0.059644084 container create 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 13:59:44 compute-0 systemd[1]: Started libpod-conmon-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope.
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.688958189 +0000 UTC m=+0.029916740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:44 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.820588479 +0000 UTC m=+0.161546990 container init 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.832617161 +0000 UTC m=+0.173575622 container start 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.836611417 +0000 UTC m=+0.177569928 container attach 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 13:59:44 compute-0 stupefied_ardinghelli[293057]: 167 167
Oct 01 13:59:44 compute-0 systemd[1]: libpod-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope: Deactivated successfully.
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.839994295 +0000 UTC m=+0.180952756 container died 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 13:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-700a99753e27ab8359371e7f3744e2068468381d724c8a1344479258bd9bb9bd-merged.mount: Deactivated successfully.
Oct 01 13:59:44 compute-0 podman[293040]: 2025-10-01 13:59:44.884296002 +0000 UTC m=+0.225254463 container remove 81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 01 13:59:44 compute-0 systemd[1]: libpod-conmon-81f447db278275ccf5a54ea8774d05d4bfe55081e3321bd711bec89594735cfe.scope: Deactivated successfully.
Oct 01 13:59:45 compute-0 podman[293081]: 2025-10-01 13:59:45.124887271 +0000 UTC m=+0.070369616 container create 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 13:59:45 compute-0 systemd[1]: Started libpod-conmon-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope.
Oct 01 13:59:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:45 compute-0 podman[293081]: 2025-10-01 13:59:45.096987435 +0000 UTC m=+0.042469840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 13:59:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 13:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 13:59:45 compute-0 podman[293081]: 2025-10-01 13:59:45.234546113 +0000 UTC m=+0.180028518 container init 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 13:59:45 compute-0 podman[293081]: 2025-10-01 13:59:45.251515222 +0000 UTC m=+0.196997587 container start 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 13:59:45 compute-0 podman[293081]: 2025-10-01 13:59:45.264604137 +0000 UTC m=+0.210086502 container attach 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 13:59:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.317 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:59:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.322 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 13:59:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:45.323 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 13:59:46 compute-0 ceph-mon[74802]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]: {
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_id": 0,
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "type": "bluestore"
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     },
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_id": 2,
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "type": "bluestore"
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     },
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_id": 1,
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:         "type": "bluestore"
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]:     }
Oct 01 13:59:46 compute-0 jovial_hamilton[293097]: }
Oct 01 13:59:46 compute-0 systemd[1]: libpod-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Deactivated successfully.
Oct 01 13:59:46 compute-0 systemd[1]: libpod-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Consumed 1.167s CPU time.
Oct 01 13:59:46 compute-0 podman[293081]: 2025-10-01 13:59:46.411007948 +0000 UTC m=+1.356490363 container died 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 13:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a438427056b4436d6077b2513b2af741d8c9fc6c9c9371f9c7e7b29f4cf34783-merged.mount: Deactivated successfully.
Oct 01 13:59:46 compute-0 podman[293081]: 2025-10-01 13:59:46.526757453 +0000 UTC m=+1.472239788 container remove 043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 13:59:46 compute-0 systemd[1]: libpod-conmon-043846ed5e22a629aef42ecb6fa1d49fa5310b9652eeedb2702421afe48ec4fd.scope: Deactivated successfully.
Oct 01 13:59:46 compute-0 sudo[292974]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 13:59:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 13:59:46 compute-0 sshd-session[293102]: Invalid user kevin from 80.94.95.116 port 61160
Oct 01 13:59:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:46 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 984596d4-e165-4175-8acf-8bad6e468dda does not exist
Oct 01 13:59:46 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 693d68e3-9deb-4b5e-8861-f2615a8adb10 does not exist
Oct 01 13:59:47 compute-0 sudo[293146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 13:59:47 compute-0 sudo[293146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:47 compute-0 sudo[293146]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:47 compute-0 sshd-session[293102]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 13:59:47 compute-0 sshd-session[293102]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.116
Oct 01 13:59:47 compute-0 sudo[293171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 13:59:47 compute-0 sudo[293171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 13:59:47 compute-0 sudo[293171]: pam_unix(sudo:session): session closed for user root
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 13:59:47 compute-0 ceph-mon[74802]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_13:59:47
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Oct 01 13:59:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 13:59:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 13:59:48 compute-0 sshd-session[293102]: Failed password for invalid user kevin from 80.94.95.116 port 61160 ssh2
Oct 01 13:59:48 compute-0 sshd-session[293102]: Connection closed by invalid user kevin 80.94.95.116 port 61160 [preauth]
Oct 01 13:59:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:50 compute-0 ceph-mon[74802]: pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:50 compute-0 podman[293197]: 2025-10-01 13:59:50.510597607 +0000 UTC m=+0.068003111 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 13:59:50 compute-0 podman[293199]: 2025-10-01 13:59:50.528459334 +0000 UTC m=+0.072927167 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 13:59:50 compute-0 podman[293196]: 2025-10-01 13:59:50.537498021 +0000 UTC m=+0.097228468 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 13:59:50 compute-0 podman[293198]: 2025-10-01 13:59:50.538152932 +0000 UTC m=+0.095153183 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 13:59:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.436 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7db4a1de-f9f9-4576-94fa-85c21b229e1a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4c794d15-edd5-4b11-8666-6aeef634f979) old=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 13:59:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.438 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4c794d15-edd5-4b11-8666-6aeef634f979 in datapath d459f90f-6a0c-444c-a0eb-e01cde881120 updated
Oct 01 13:59:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.439 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d459f90f-6a0c-444c-a0eb-e01cde881120, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 13:59:51 compute-0 ovn_metadata_agent[161885]: 2025-10-01 13:59:51.441 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[964f635c-abfb-40bd-a9d9-5b0fc4b6ef8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 13:59:52 compute-0 ceph-mon[74802]: pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:54 compute-0 ceph-mon[74802]: pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 13:59:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:59:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 13:59:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:59:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 13:59:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3918538823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 13:59:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 13:59:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7727 writes, 28K keys, 7727 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7727 writes, 1851 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 973 writes, 2331 keys, 973 commit groups, 1.0 writes per commit group, ingest: 1.18 MB, 0.00 MB/s
                                           Interval WAL: 973 writes, 437 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 13:59:56 compute-0 ceph-mon[74802]: pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 13:59:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 13:59:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 13:59:58 compute-0 ceph-mon[74802]: pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 13:59:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:00 compute-0 ceph-mon[74802]: pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:00:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 9156 writes, 34K keys, 9156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9156 writes, 2284 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1205 writes, 3436 keys, 1205 commit groups, 1.0 writes per commit group, ingest: 1.86 MB, 0.00 MB/s
                                           Interval WAL: 1205 writes, 535 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:00:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:02 compute-0 ceph-mon[74802]: pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:02 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.688 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7db4a1de-f9f9-4576-94fa-85c21b229e1a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4c794d15-edd5-4b11-8666-6aeef634f979) old=Port_Binding(mac=['fa:16:3e:fd:ba:2c 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d459f90f-6a0c-444c-a0eb-e01cde881120', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b660fd21a334b23965979eb62726b37', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:00:02 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.690 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4c794d15-edd5-4b11-8666-6aeef634f979 in datapath d459f90f-6a0c-444c-a0eb-e01cde881120 updated
Oct 01 14:00:02 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.692 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d459f90f-6a0c-444c-a0eb-e01cde881120, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:00:02 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:02.693 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[c1aee2cf-8110-41ab-ba8a-f71a33d666bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:00:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:03 compute-0 ceph-mon[74802]: pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:00:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 8168 writes, 30K keys, 8168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8168 writes, 2028 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1293 writes, 3208 keys, 1293 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s
                                           Interval WAL: 1293 writes, 587 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:00:06 compute-0 ceph-mon[74802]: pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 14:00:08 compute-0 ceph-mon[74802]: pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:10 compute-0 ceph-mon[74802]: pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:11 compute-0 nova_compute[260022]: 2025-10-01 14:00:11.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:12 compute-0 ceph-mon[74802]: pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.326 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:00:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.327 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:00:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:12.327 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:00:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:14 compute-0 ceph-mon[74802]: pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:16 compute-0 ceph-mon[74802]: pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:17 compute-0 nova_compute[260022]: 2025-10-01 14:00:17.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:18 compute-0 ceph-mon[74802]: pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:18 compute-0 nova_compute[260022]: 2025-10-01 14:00:18.375 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:18 compute-0 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:00:18 compute-0 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:18 compute-0 nova_compute[260022]: 2025-10-01 14:00:18.376 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 14:00:18 compute-0 nova_compute[260022]: 2025-10-01 14:00:18.395 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 14:00:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.407 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.407 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.408 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.408 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:00:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:00:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337189917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:00:19 compute-0 nova_compute[260022]: 2025-10-01 14:00:19.866 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.066 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.067 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.067 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.068 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.149 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.174 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.190 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 1cecb2c6-69e6-4006-b96b-9e11a42c9cb1 has allocations against this compute host but is not found in the database.
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:00:20 compute-0 ceph-mon[74802]: pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:20 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2337189917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.431 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:00:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:00:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452765038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.893 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.901 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.920 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.922 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:00:20 compute-0 nova_compute[260022]: 2025-10-01 14:00:20.922 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:00:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:21 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/452765038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:00:21 compute-0 podman[293320]: 2025-10-01 14:00:21.539301815 +0000 UTC m=+0.077910165 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:00:21 compute-0 podman[293318]: 2025-10-01 14:00:21.54384204 +0000 UTC m=+0.082796701 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20250923, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 14:00:21 compute-0 podman[293317]: 2025-10-01 14:00:21.569351109 +0000 UTC m=+0.113694121 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 14:00:21 compute-0 podman[293319]: 2025-10-01 14:00:21.581550226 +0000 UTC m=+0.115524179 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:21 compute-0 nova_compute[260022]: 2025-10-01 14:00:21.903 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:22 compute-0 ceph-mon[74802]: pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:24 compute-0 ceph-mon[74802]: pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:25 compute-0 nova_compute[260022]: 2025-10-01 14:00:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:25 compute-0 nova_compute[260022]: 2025-10-01 14:00:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:00:25 compute-0 nova_compute[260022]: 2025-10-01 14:00:25.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:00:25 compute-0 nova_compute[260022]: 2025-10-01 14:00:25.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:00:25 compute-0 nova_compute[260022]: 2025-10-01 14:00:25.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:26 compute-0 nova_compute[260022]: 2025-10-01 14:00:26.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:26 compute-0 nova_compute[260022]: 2025-10-01 14:00:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:26 compute-0 ceph-mon[74802]: pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:28 compute-0 ceph-mon[74802]: pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:29 compute-0 nova_compute[260022]: 2025-10-01 14:00:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:30 compute-0 ceph-mon[74802]: pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:31 compute-0 ceph-mon[74802]: pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:32 compute-0 nova_compute[260022]: 2025-10-01 14:00:32.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:32 compute-0 nova_compute[260022]: 2025-10-01 14:00:32.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 14:00:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:34 compute-0 ceph-mon[74802]: pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:36 compute-0 ceph-mon[74802]: pgmap v1730: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.282314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236282366, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 757, "num_deletes": 251, "total_data_size": 961585, "memory_usage": 974984, "flush_reason": "Manual Compaction"}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236290944, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 952583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34356, "largest_seqno": 35112, "table_properties": {"data_size": 948651, "index_size": 1712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8738, "raw_average_key_size": 19, "raw_value_size": 940789, "raw_average_value_size": 2095, "num_data_blocks": 76, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327173, "oldest_key_time": 1759327173, "file_creation_time": 1759327236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8687 microseconds, and 5477 cpu microseconds.
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.290997) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 952583 bytes OK
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.291029) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292690) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292712) EVENT_LOG_v1 {"time_micros": 1759327236292705, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.292762) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 957739, prev total WAL file size 957739, number of live WAL files 2.
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.293628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(930KB)], [77(8112KB)]
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236293699, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9259375, "oldest_snapshot_seqno": -1}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5389 keys, 7498110 bytes, temperature: kUnknown
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236353001, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7498110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7463004, "index_size": 20532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 137265, "raw_average_key_size": 25, "raw_value_size": 7366295, "raw_average_value_size": 1366, "num_data_blocks": 835, "num_entries": 5389, "num_filter_entries": 5389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.353300) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7498110 bytes
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.355751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.9 rd, 126.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(17.6) write-amplify(7.9) OK, records in: 5903, records dropped: 514 output_compression: NoCompression
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.355781) EVENT_LOG_v1 {"time_micros": 1759327236355767, "job": 44, "event": "compaction_finished", "compaction_time_micros": 59391, "compaction_time_cpu_micros": 34064, "output_level": 6, "num_output_files": 1, "total_output_size": 7498110, "num_input_records": 5903, "num_output_records": 5389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236356194, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327236359146, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.293525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:36 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:00:36.359285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:00:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:37 compute-0 nova_compute[260022]: 2025-10-01 14:00:37.367 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:38 compute-0 ceph-mon[74802]: pgmap v1731: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:40 compute-0 ceph-mon[74802]: pgmap v1732: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:42 compute-0 ceph-mon[74802]: pgmap v1733: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:44 compute-0 ceph-mon[74802]: pgmap v1734: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:45.479 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:00:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:45.481 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:00:46 compute-0 ceph-mon[74802]: pgmap v1735: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:46 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:00:46.483 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:47 compute-0 sudo[293404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:47 compute-0 sudo[293404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:47 compute-0 sudo[293404]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:47 compute-0 sudo[293429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:00:47 compute-0 sudo[293429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:47 compute-0 sudo[293429]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:47 compute-0 sudo[293454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:47 compute-0 sudo[293454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:47 compute-0 sudo[293454]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:47 compute-0 sudo[293479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:00:47 compute-0 sudo[293479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:00:47
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root']
Oct 01 14:00:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:00:48 compute-0 sudo[293479]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a372d3fe-1915-4d22-b989-f4e718869959 does not exist
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a6bc76c2-df8c-4df9-a224-5b4d6ac69046 does not exist
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0d1bdaae-986e-4371-bfb6-c07ebbe68dca does not exist
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:00:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:00:48 compute-0 sudo[293537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:48 compute-0 sudo[293537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:48 compute-0 sudo[293537]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:00:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:00:48 compute-0 sudo[293562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:00:48 compute-0 sudo[293562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:48 compute-0 sudo[293562]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:48 compute-0 ceph-mon[74802]: pgmap v1736: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:00:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:00:48 compute-0 sudo[293587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:48 compute-0 sudo[293587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:48 compute-0 sudo[293587]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:48 compute-0 sudo[293612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:00:48 compute-0 sudo[293612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.87449189 +0000 UTC m=+0.069425235 container create 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:00:48 compute-0 systemd[1]: Started libpod-conmon-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope.
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.848601348 +0000 UTC m=+0.043534703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.975037562 +0000 UTC m=+0.169970917 container init 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.987188788 +0000 UTC m=+0.182122123 container start 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.99351807 +0000 UTC m=+0.188451415 container attach 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:00:48 compute-0 zealous_mahavira[293695]: 167 167
Oct 01 14:00:48 compute-0 systemd[1]: libpod-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope: Deactivated successfully.
Oct 01 14:00:48 compute-0 podman[293678]: 2025-10-01 14:00:48.994846531 +0000 UTC m=+0.189779896 container died 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-93e17d8a8dd5d0d80f60aa1e762772c9f33ccb5404df9cddfb70e31e5ecc0cc4-merged.mount: Deactivated successfully.
Oct 01 14:00:49 compute-0 podman[293678]: 2025-10-01 14:00:49.050938172 +0000 UTC m=+0.245871527 container remove 871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:00:49 compute-0 systemd[1]: libpod-conmon-871d4148f403897828fac9d0c2c5c31b76b4d29879b366c4596a4b5ae7e9be4a.scope: Deactivated successfully.
Oct 01 14:00:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:49 compute-0 podman[293720]: 2025-10-01 14:00:49.343634237 +0000 UTC m=+0.120493807 container create 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:00:49 compute-0 podman[293720]: 2025-10-01 14:00:49.264496253 +0000 UTC m=+0.041355873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:49 compute-0 systemd[1]: Started libpod-conmon-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope.
Oct 01 14:00:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:49 compute-0 podman[293720]: 2025-10-01 14:00:49.464521194 +0000 UTC m=+0.241380814 container init 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:49 compute-0 podman[293720]: 2025-10-01 14:00:49.476107792 +0000 UTC m=+0.252967342 container start 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:00:49 compute-0 podman[293720]: 2025-10-01 14:00:49.48201818 +0000 UTC m=+0.258877730 container attach 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:49 compute-0 nova_compute[260022]: 2025-10-01 14:00:49.758 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:00:50 compute-0 ceph-mon[74802]: pgmap v1737: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:50 compute-0 happy_jepsen[293736]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:00:50 compute-0 happy_jepsen[293736]: --> relative data size: 1.0
Oct 01 14:00:50 compute-0 happy_jepsen[293736]: --> All data devices are unavailable
Oct 01 14:00:50 compute-0 systemd[1]: libpod-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Deactivated successfully.
Oct 01 14:00:50 compute-0 systemd[1]: libpod-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Consumed 1.113s CPU time.
Oct 01 14:00:50 compute-0 podman[293765]: 2025-10-01 14:00:50.693832987 +0000 UTC m=+0.036489269 container died 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8347d169a4153be9252aa2bd2db97402d620e5cf7f9e870fe7ad519eb8f40927-merged.mount: Deactivated successfully.
Oct 01 14:00:50 compute-0 podman[293765]: 2025-10-01 14:00:50.762816958 +0000 UTC m=+0.105473200 container remove 1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:00:50 compute-0 systemd[1]: libpod-conmon-1a6a1f5752eda4f7fd52fb5da5577d0b8cf026b05c2644558679992da84fa588.scope: Deactivated successfully.
Oct 01 14:00:50 compute-0 sudo[293612]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:50 compute-0 sudo[293780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:50 compute-0 sudo[293780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:50 compute-0 sudo[293780]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:51 compute-0 sudo[293805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:00:51 compute-0 sudo[293805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:51 compute-0 sudo[293805]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:51 compute-0 sudo[293830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:51 compute-0 sudo[293830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:51 compute-0 sudo[293830]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:51 compute-0 sudo[293855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:00:51 compute-0 sudo[293855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.633932077 +0000 UTC m=+0.066811692 container create 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.60564659 +0000 UTC m=+0.038526255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:51 compute-0 systemd[1]: Started libpod-conmon-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope.
Oct 01 14:00:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.78110174 +0000 UTC m=+0.213981335 container init 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.78833565 +0000 UTC m=+0.221215255 container start 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.7930414 +0000 UTC m=+0.225920995 container attach 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct 01 14:00:51 compute-0 epic_dewdney[293972]: 167 167
Oct 01 14:00:51 compute-0 systemd[1]: libpod-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope: Deactivated successfully.
Oct 01 14:00:51 compute-0 conmon[293972]: conmon 3f14b18ebf8282e0b63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope/container/memory.events
Oct 01 14:00:51 compute-0 podman[293936]: 2025-10-01 14:00:51.796503269 +0000 UTC m=+0.096234056 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:51 compute-0 podman[293935]: 2025-10-01 14:00:51.797396388 +0000 UTC m=+0.100981628 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible)
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.797494101 +0000 UTC m=+0.230373726 container died 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 14:00:51 compute-0 podman[293942]: 2025-10-01 14:00:51.810787653 +0000 UTC m=+0.105937945 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 01 14:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-32b3c50bc0f21ae937cbc0b7849fcb712e587ac77834b67a8e3574c50fbaa183-merged.mount: Deactivated successfully.
Oct 01 14:00:51 compute-0 podman[293932]: 2025-10-01 14:00:51.829664893 +0000 UTC m=+0.146731831 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:00:51 compute-0 podman[293917]: 2025-10-01 14:00:51.842098137 +0000 UTC m=+0.274977722 container remove 3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:00:51 compute-0 systemd[1]: libpod-conmon-3f14b18ebf8282e0b63de8a75d7b944db5553738158af4111774b9096a4ccaf9.scope: Deactivated successfully.
Oct 01 14:00:52 compute-0 podman[294037]: 2025-10-01 14:00:52.077579484 +0000 UTC m=+0.078281647 container create 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:00:52 compute-0 systemd[1]: Started libpod-conmon-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope.
Oct 01 14:00:52 compute-0 podman[294037]: 2025-10-01 14:00:52.049480852 +0000 UTC m=+0.050183095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:52 compute-0 podman[294037]: 2025-10-01 14:00:52.194852968 +0000 UTC m=+0.195555201 container init 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:00:52 compute-0 podman[294037]: 2025-10-01 14:00:52.201435527 +0000 UTC m=+0.202137710 container start 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:00:52 compute-0 podman[294037]: 2025-10-01 14:00:52.205995961 +0000 UTC m=+0.206698194 container attach 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 14:00:52 compute-0 ceph-mon[74802]: pgmap v1738: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:52 compute-0 stupefied_kare[294053]: {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     "0": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "devices": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "/dev/loop3"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             ],
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_name": "ceph_lv0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_size": "21470642176",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "name": "ceph_lv0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "tags": {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_name": "ceph",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.crush_device_class": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.encrypted": "0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_id": "0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.vdo": "0"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             },
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "vg_name": "ceph_vg0"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         }
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     ],
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     "1": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "devices": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "/dev/loop4"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             ],
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_name": "ceph_lv1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_size": "21470642176",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "name": "ceph_lv1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "tags": {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_name": "ceph",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.crush_device_class": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.encrypted": "0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_id": "1",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.vdo": "0"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             },
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "vg_name": "ceph_vg1"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         }
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     ],
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     "2": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "devices": [
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "/dev/loop5"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             ],
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_name": "ceph_lv2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_size": "21470642176",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "name": "ceph_lv2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "tags": {
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.cluster_name": "ceph",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.crush_device_class": "",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.encrypted": "0",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osd_id": "2",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:                 "ceph.vdo": "0"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             },
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "type": "block",
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:             "vg_name": "ceph_vg2"
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:         }
Oct 01 14:00:52 compute-0 stupefied_kare[294053]:     ]
Oct 01 14:00:52 compute-0 stupefied_kare[294053]: }
Oct 01 14:00:53 compute-0 systemd[1]: libpod-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope: Deactivated successfully.
Oct 01 14:00:53 compute-0 podman[294037]: 2025-10-01 14:00:53.012598383 +0000 UTC m=+1.013300576 container died 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-baa10183c5d3d764e003f8429807b51461fe65085261dc93198c3cebfdca5d14-merged.mount: Deactivated successfully.
Oct 01 14:00:53 compute-0 podman[294037]: 2025-10-01 14:00:53.125123046 +0000 UTC m=+1.125825209 container remove 87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 14:00:53 compute-0 systemd[1]: libpod-conmon-87a22b25afd3f1f0a7f4c160e2bda9f223fbf2a781b85cbfc18143c0d6a95dc0.scope: Deactivated successfully.
Oct 01 14:00:53 compute-0 sudo[293855]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:53 compute-0 sudo[294075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:53 compute-0 sudo[294075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:53 compute-0 sudo[294075]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:53 compute-0 sudo[294100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:00:53 compute-0 sudo[294100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:53 compute-0 sudo[294100]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:53 compute-0 sudo[294125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:53 compute-0 sudo[294125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:53 compute-0 sudo[294125]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:53 compute-0 sudo[294150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:00:53 compute-0 sudo[294150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:53 compute-0 podman[294215]: 2025-10-01 14:00:53.915741139 +0000 UTC m=+0.045631310 container create fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:00:53 compute-0 systemd[1]: Started libpod-conmon-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope.
Oct 01 14:00:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:53 compute-0 podman[294215]: 2025-10-01 14:00:53.896915901 +0000 UTC m=+0.026806112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:54 compute-0 podman[294215]: 2025-10-01 14:00:53.999965113 +0000 UTC m=+0.129855304 container init fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:00:54 compute-0 podman[294215]: 2025-10-01 14:00:54.009426454 +0000 UTC m=+0.139316635 container start fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:00:54 compute-0 podman[294215]: 2025-10-01 14:00:54.012706768 +0000 UTC m=+0.142596949 container attach fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:00:54 compute-0 adoring_hodgkin[294231]: 167 167
Oct 01 14:00:54 compute-0 systemd[1]: libpod-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope: Deactivated successfully.
Oct 01 14:00:54 compute-0 podman[294215]: 2025-10-01 14:00:54.017610964 +0000 UTC m=+0.147501145 container died fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d9ea93023b747970eb036b70b020d9e7967872103ea75de2429770178756ee-merged.mount: Deactivated successfully.
Oct 01 14:00:54 compute-0 podman[294215]: 2025-10-01 14:00:54.063180861 +0000 UTC m=+0.193071052 container remove fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:00:54 compute-0 systemd[1]: libpod-conmon-fcd31b8994638ef2dffa7cb2f157aad7c7d84ce1324ec8aec78759168be3ad1f.scope: Deactivated successfully.
Oct 01 14:00:54 compute-0 podman[294254]: 2025-10-01 14:00:54.276123782 +0000 UTC m=+0.056116453 container create 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:00:54 compute-0 systemd[1]: Started libpod-conmon-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope.
Oct 01 14:00:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:00:54 compute-0 podman[294254]: 2025-10-01 14:00:54.260437293 +0000 UTC m=+0.040429964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:00:54 compute-0 podman[294254]: 2025-10-01 14:00:54.356218015 +0000 UTC m=+0.136210786 container init 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 14:00:54 compute-0 podman[294254]: 2025-10-01 14:00:54.362839575 +0000 UTC m=+0.142832256 container start 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:00:54 compute-0 podman[294254]: 2025-10-01 14:00:54.366814581 +0000 UTC m=+0.146807292 container attach 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:00:54 compute-0 ceph-mon[74802]: pgmap v1739: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:00:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:00:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:00:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:00:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:00:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1592897968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:00:55 compute-0 lucid_banzai[294270]: {
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_id": 0,
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "type": "bluestore"
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     },
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_id": 2,
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "type": "bluestore"
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     },
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_id": 1,
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:         "type": "bluestore"
Oct 01 14:00:55 compute-0 lucid_banzai[294270]:     }
Oct 01 14:00:55 compute-0 lucid_banzai[294270]: }
Oct 01 14:00:55 compute-0 systemd[1]: libpod-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Deactivated successfully.
Oct 01 14:00:55 compute-0 podman[294254]: 2025-10-01 14:00:55.464903228 +0000 UTC m=+1.244895909 container died 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 01 14:00:55 compute-0 systemd[1]: libpod-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Consumed 1.106s CPU time.
Oct 01 14:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-30dca3a5cff3562a00d64ed6a42a03c447059815cbfc6b164dfd9d967fd74714-merged.mount: Deactivated successfully.
Oct 01 14:00:55 compute-0 podman[294254]: 2025-10-01 14:00:55.541371746 +0000 UTC m=+1.321364457 container remove 04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 14:00:55 compute-0 systemd[1]: libpod-conmon-04532c6c59c6a9df09714ac2c1f4fd6a25d00668800bcbdb724eb9e9af82852b.scope: Deactivated successfully.
Oct 01 14:00:55 compute-0 sudo[294150]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:00:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:00:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f9c980b3-138f-40b4-ad1c-3b5f76afaea2 does not exist
Oct 01 14:00:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e3df028d-1e60-4315-b1b7-2763c49c7642 does not exist
Oct 01 14:00:55 compute-0 sudo[294316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:00:55 compute-0 sudo[294316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:55 compute-0 sudo[294316]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:55 compute-0 sudo[294341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:00:55 compute-0 sudo[294341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:00:55 compute-0 sudo[294341]: pam_unix(sudo:session): session closed for user root
Oct 01 14:00:56 compute-0 ceph-mon[74802]: pgmap v1740: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:00:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:00:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:00:58 compute-0 ceph-mon[74802]: pgmap v1741: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:00:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:00 compute-0 ceph-mon[74802]: pgmap v1742: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:01 compute-0 CROND[294367]: (root) CMD (run-parts /etc/cron.hourly)
Oct 01 14:01:01 compute-0 run-parts[294370]: (/etc/cron.hourly) starting 0anacron
Oct 01 14:01:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:01 compute-0 run-parts[294376]: (/etc/cron.hourly) finished 0anacron
Oct 01 14:01:01 compute-0 CROND[294366]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 01 14:01:02 compute-0 ceph-mon[74802]: pgmap v1743: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:03 compute-0 ceph-mon[74802]: pgmap v1744: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:06 compute-0 ceph-mon[74802]: pgmap v1745: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 14:01:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:08 compute-0 ceph-mon[74802]: pgmap v1746: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Oct 01 14:01:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct 01 14:01:10 compute-0 ceph-mon[74802]: pgmap v1747: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct 01 14:01:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct 01 14:01:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.328 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:01:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:01:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:01:12 compute-0 nova_compute[260022]: 2025-10-01 14:01:12.350 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:12 compute-0 ceph-mon[74802]: pgmap v1748: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Oct 01 14:01:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:13 compute-0 ceph-mon[74802]: pgmap v1749: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:13 compute-0 nova_compute[260022]: 2025-10-01 14:01:13.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:16 compute-0 ceph-mon[74802]: pgmap v1750: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:18 compute-0 ceph-mon[74802]: pgmap v1751: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:01:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct 01 14:01:20 compute-0 ceph-mon[74802]: pgmap v1752: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.371 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.371 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:01:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:01:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229369265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:01:20 compute-0 nova_compute[260022]: 2025-10-01 14:01:20.814 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.037 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.039 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:01:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 14:01:21 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/229369265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.332 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.356 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.357 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.358 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.434 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.466 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.467 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.495 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.514 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 14:01:21 compute-0 nova_compute[260022]: 2025-10-01 14:01:21.573 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:01:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:01:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987762859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:01:22 compute-0 nova_compute[260022]: 2025-10-01 14:01:22.032 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:01:22 compute-0 nova_compute[260022]: 2025-10-01 14:01:22.040 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:01:22 compute-0 nova_compute[260022]: 2025-10-01 14:01:22.058 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:01:22 compute-0 nova_compute[260022]: 2025-10-01 14:01:22.061 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:01:22 compute-0 nova_compute[260022]: 2025-10-01 14:01:22.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:01:22 compute-0 ceph-mon[74802]: pgmap v1753: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 14:01:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1987762859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:01:22 compute-0 podman[294422]: 2025-10-01 14:01:22.551865505 +0000 UTC m=+0.093969265 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 01 14:01:22 compute-0 podman[294424]: 2025-10-01 14:01:22.573789171 +0000 UTC m=+0.109069554 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 01 14:01:22 compute-0 podman[294421]: 2025-10-01 14:01:22.577829239 +0000 UTC m=+0.121951263 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 14:01:22 compute-0 podman[294423]: 2025-10-01 14:01:22.58414424 +0000 UTC m=+0.123786752 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 14:01:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:23 compute-0 nova_compute[260022]: 2025-10-01 14:01:23.064 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 14:01:24 compute-0 ceph-mon[74802]: pgmap v1754: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Oct 01 14:01:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:26 compute-0 nova_compute[260022]: 2025-10-01 14:01:26.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:26 compute-0 nova_compute[260022]: 2025-10-01 14:01:26.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:26 compute-0 ceph-mon[74802]: pgmap v1755: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:27 compute-0 nova_compute[260022]: 2025-10-01 14:01:27.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:27 compute-0 nova_compute[260022]: 2025-10-01 14:01:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:27 compute-0 nova_compute[260022]: 2025-10-01 14:01:27.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:01:27 compute-0 nova_compute[260022]: 2025-10-01 14:01:27.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:01:27 compute-0 nova_compute[260022]: 2025-10-01 14:01:27.359 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:01:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:28 compute-0 ceph-mon[74802]: pgmap v1756: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:30 compute-0 nova_compute[260022]: 2025-10-01 14:01:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:01:30 compute-0 ceph-mon[74802]: pgmap v1757: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:32 compute-0 ceph-mon[74802]: pgmap v1758: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:34 compute-0 ceph-mon[74802]: pgmap v1759: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:36 compute-0 ceph-mon[74802]: pgmap v1760: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:38 compute-0 ceph-mon[74802]: pgmap v1761: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:40 compute-0 ceph-mon[74802]: pgmap v1762: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:41 compute-0 ceph-mon[74802]: pgmap v1763: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:44 compute-0 ceph-mon[74802]: pgmap v1764: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:45.590 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:01:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:45.592 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:01:46 compute-0 ceph-mon[74802]: pgmap v1765: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:46 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:01:46.594 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:01:47
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.data']
Oct 01 14:01:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:01:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:01:48 compute-0 ceph-mon[74802]: pgmap v1766: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:50 compute-0 ceph-mon[74802]: pgmap v1767: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:52 compute-0 ceph-mon[74802]: pgmap v1768: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:53 compute-0 podman[294505]: 2025-10-01 14:01:53.546782183 +0000 UTC m=+0.081904631 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:01:53 compute-0 podman[294506]: 2025-10-01 14:01:53.562935346 +0000 UTC m=+0.089896785 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:01:53 compute-0 podman[294512]: 2025-10-01 14:01:53.569306598 +0000 UTC m=+0.092774517 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 14:01:53 compute-0 podman[294504]: 2025-10-01 14:01:53.575163834 +0000 UTC m=+0.121648443 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:01:54 compute-0 ceph-mon[74802]: pgmap v1769: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:01:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:01:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:01:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:01:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:01:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2419855985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:01:55 compute-0 sudo[294587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:55 compute-0 sudo[294587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:55 compute-0 sudo[294587]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:55 compute-0 sudo[294612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:01:55 compute-0 sudo[294612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:55 compute-0 sudo[294612]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:56 compute-0 sudo[294637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:56 compute-0 sudo[294637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:56 compute-0 sudo[294637]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:56 compute-0 sudo[294662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:01:56 compute-0 sudo[294662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:56 compute-0 ceph-mon[74802]: pgmap v1770: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:56 compute-0 sudo[294662]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:01:56 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e380aa91-b1cb-4610-b859-513cefaef386 does not exist
Oct 01 14:01:56 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fe104ba5-041e-425a-8caa-a0be4dd9d106 does not exist
Oct 01 14:01:56 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e1c88eba-54aa-4b88-9fc6-7bdf856d4cc7 does not exist
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:01:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:01:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:01:56 compute-0 sudo[294718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:56 compute-0 sudo[294718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:56 compute-0 sudo[294718]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:56 compute-0 sudo[294743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:01:56 compute-0 sudo[294743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:56 compute-0 sudo[294743]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:57 compute-0 sudo[294768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:57 compute-0 sudo[294768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:57 compute-0 sudo[294768]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:57 compute-0 sudo[294793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:01:57 compute-0 sudo[294793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:01:57 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.515095754 +0000 UTC m=+0.063903020 container create af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 14:01:57 compute-0 systemd[1]: Started libpod-conmon-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope.
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.491307758 +0000 UTC m=+0.040115024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:01:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.631027975 +0000 UTC m=+0.179835301 container init af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:01:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.640172305 +0000 UTC m=+0.188979561 container start af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.644221174 +0000 UTC m=+0.193028440 container attach af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 14:01:57 compute-0 heuristic_easley[294872]: 167 167
Oct 01 14:01:57 compute-0 systemd[1]: libpod-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope: Deactivated successfully.
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.649683248 +0000 UTC m=+0.198490524 container died af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a2a284c748c623ce8d7c12a1b7e3f53b3985fced1a7e99b8b20f9c1d05e483-merged.mount: Deactivated successfully.
Oct 01 14:01:57 compute-0 podman[294856]: 2025-10-01 14:01:57.702646759 +0000 UTC m=+0.251454005 container remove af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:01:57 compute-0 systemd[1]: libpod-conmon-af16e438670f55a70d96f27365a4e7fec27cc02551bd7076900a15f42502e902.scope: Deactivated successfully.
Oct 01 14:01:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:01:57 compute-0 podman[294896]: 2025-10-01 14:01:57.940027077 +0000 UTC m=+0.069880780 container create 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 14:01:57 compute-0 systemd[1]: Started libpod-conmon-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope.
Oct 01 14:01:58 compute-0 podman[294896]: 2025-10-01 14:01:57.911323265 +0000 UTC m=+0.041177008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:01:58 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:01:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:01:58 compute-0 podman[294896]: 2025-10-01 14:01:58.047305063 +0000 UTC m=+0.177158816 container init 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 14:01:58 compute-0 podman[294896]: 2025-10-01 14:01:58.061015128 +0000 UTC m=+0.190868831 container start 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:01:58 compute-0 podman[294896]: 2025-10-01 14:01:58.068864777 +0000 UTC m=+0.198718530 container attach 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 14:01:58 compute-0 ceph-mon[74802]: pgmap v1771: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:01:59 compute-0 confident_agnesi[294913]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:01:59 compute-0 confident_agnesi[294913]: --> relative data size: 1.0
Oct 01 14:01:59 compute-0 confident_agnesi[294913]: --> All data devices are unavailable
Oct 01 14:01:59 compute-0 systemd[1]: libpod-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Deactivated successfully.
Oct 01 14:01:59 compute-0 systemd[1]: libpod-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Consumed 1.223s CPU time.
Oct 01 14:01:59 compute-0 podman[294896]: 2025-10-01 14:01:59.327013726 +0000 UTC m=+1.456867419 container died 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 14:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4f7a21dd868cf1d08a9f818c5799e2239a9fd8d8736a6503ce752266ac83e1-merged.mount: Deactivated successfully.
Oct 01 14:01:59 compute-0 podman[294896]: 2025-10-01 14:01:59.434908482 +0000 UTC m=+1.564762155 container remove 9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:01:59 compute-0 systemd[1]: libpod-conmon-9677b8bbff91e1228176857caf951ccf735b28b3cf8df98677cce9d46c77fb00.scope: Deactivated successfully.
Oct 01 14:01:59 compute-0 sudo[294793]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:59 compute-0 sudo[294952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:59 compute-0 sudo[294952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:59 compute-0 sudo[294952]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:59 compute-0 sudo[294977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:01:59 compute-0 sudo[294977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:59 compute-0 sudo[294977]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:59 compute-0 sudo[295002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:01:59 compute-0 sudo[295002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:01:59 compute-0 sudo[295002]: pam_unix(sudo:session): session closed for user root
Oct 01 14:01:59 compute-0 sudo[295027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:01:59 compute-0 sudo[295027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.327009387 +0000 UTC m=+0.041948683 container create 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 14:02:00 compute-0 systemd[1]: Started libpod-conmon-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope.
Oct 01 14:02:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.399251442 +0000 UTC m=+0.114190808 container init 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.310240205 +0000 UTC m=+0.025179501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:02:00 compute-0 ceph-mon[74802]: pgmap v1772: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.408268608 +0000 UTC m=+0.123207884 container start 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:02:00 compute-0 quirky_wright[295109]: 167 167
Oct 01 14:02:00 compute-0 systemd[1]: libpod-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope: Deactivated successfully.
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.415283711 +0000 UTC m=+0.130223087 container attach 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.416273913 +0000 UTC m=+0.131213229 container died 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-528963a2194888f5aaf48448b3632c7b62377b2fddfc1223354408c2ef574d46-merged.mount: Deactivated successfully.
Oct 01 14:02:00 compute-0 podman[295093]: 2025-10-01 14:02:00.458115761 +0000 UTC m=+0.173055077 container remove 837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wright, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct 01 14:02:00 compute-0 systemd[1]: libpod-conmon-837da209b4a467c7ed3956b126fcf9942ce4d40788176cc5cc4682104d2ecfd6.scope: Deactivated successfully.
Oct 01 14:02:00 compute-0 podman[295133]: 2025-10-01 14:02:00.675956158 +0000 UTC m=+0.054275015 container create 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:02:00 compute-0 systemd[1]: Started libpod-conmon-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope.
Oct 01 14:02:00 compute-0 podman[295133]: 2025-10-01 14:02:00.653801874 +0000 UTC m=+0.032120721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:02:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:00 compute-0 podman[295133]: 2025-10-01 14:02:00.788214552 +0000 UTC m=+0.166533459 container init 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:02:00 compute-0 podman[295133]: 2025-10-01 14:02:00.803193647 +0000 UTC m=+0.181512504 container start 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 01 14:02:00 compute-0 podman[295133]: 2025-10-01 14:02:00.807337699 +0000 UTC m=+0.185656606 container attach 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:02:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]: {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     "0": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "devices": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "/dev/loop3"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             ],
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_name": "ceph_lv0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_size": "21470642176",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "name": "ceph_lv0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "tags": {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_name": "ceph",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.crush_device_class": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.encrypted": "0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_id": "0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.vdo": "0"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             },
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "vg_name": "ceph_vg0"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         }
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     ],
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     "1": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "devices": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "/dev/loop4"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             ],
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_name": "ceph_lv1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_size": "21470642176",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "name": "ceph_lv1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "tags": {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_name": "ceph",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.crush_device_class": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.encrypted": "0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_id": "1",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.vdo": "0"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             },
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "vg_name": "ceph_vg1"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         }
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     ],
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     "2": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "devices": [
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "/dev/loop5"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             ],
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_name": "ceph_lv2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_size": "21470642176",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "name": "ceph_lv2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "tags": {
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.cluster_name": "ceph",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.crush_device_class": "",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.encrypted": "0",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osd_id": "2",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:                 "ceph.vdo": "0"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             },
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "type": "block",
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:             "vg_name": "ceph_vg2"
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:         }
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]:     ]
Oct 01 14:02:01 compute-0 relaxed_archimedes[295150]: }
Oct 01 14:02:01 compute-0 systemd[1]: libpod-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope: Deactivated successfully.
Oct 01 14:02:01 compute-0 podman[295133]: 2025-10-01 14:02:01.611293716 +0000 UTC m=+0.989612573 container died 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad89ae6d2b09f33d0f857cc15dc3a806dc2acd3282957d0496ea7c1106b22555-merged.mount: Deactivated successfully.
Oct 01 14:02:01 compute-0 podman[295133]: 2025-10-01 14:02:01.684575463 +0000 UTC m=+1.062894290 container remove 5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:02:01 compute-0 systemd[1]: libpod-conmon-5de57eead185b319e14a370237fddefbd75c754b4a20155311d385b38f47312d.scope: Deactivated successfully.
Oct 01 14:02:01 compute-0 sudo[295027]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:01 compute-0 sudo[295171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:02:01 compute-0 sudo[295171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:01 compute-0 sudo[295171]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:01 compute-0 sudo[295196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:02:01 compute-0 sudo[295196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:01 compute-0 sudo[295196]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:01 compute-0 sudo[295221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:02:01 compute-0 sudo[295221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:01 compute-0 sudo[295221]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:02 compute-0 sudo[295246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:02:02 compute-0 sudo[295246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:02 compute-0 ceph-mon[74802]: pgmap v1773: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.511892942 +0000 UTC m=+0.071539792 container create 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:02:02 compute-0 systemd[1]: Started libpod-conmon-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope.
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.483686736 +0000 UTC m=+0.043333636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:02:02 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.616956308 +0000 UTC m=+0.176603168 container init 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.627903616 +0000 UTC m=+0.187550476 container start 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.63213411 +0000 UTC m=+0.191781020 container attach 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 14:02:02 compute-0 loving_zhukovsky[295328]: 167 167
Oct 01 14:02:02 compute-0 systemd[1]: libpod-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope: Deactivated successfully.
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.635445685 +0000 UTC m=+0.195092545 container died 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 01 14:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-61ceb3f6aeb8cb3308c67e57ebd9535f7d3b7d307a092bbe643ceca3bf088d49-merged.mount: Deactivated successfully.
Oct 01 14:02:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:02 compute-0 podman[295312]: 2025-10-01 14:02:02.786578104 +0000 UTC m=+0.346224964 container remove 10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_zhukovsky, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 14:02:02 compute-0 systemd[1]: libpod-conmon-10af7f2591daadd60dcad949001dae192d1be5091a0b720125de7c35d6a86d36.scope: Deactivated successfully.
Oct 01 14:02:02 compute-0 podman[295354]: 2025-10-01 14:02:02.969659367 +0000 UTC m=+0.053935173 container create 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:02:03 compute-0 systemd[1]: Started libpod-conmon-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope.
Oct 01 14:02:03 compute-0 podman[295354]: 2025-10-01 14:02:02.946039037 +0000 UTC m=+0.030314933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:02:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:02:03 compute-0 podman[295354]: 2025-10-01 14:02:03.082548371 +0000 UTC m=+0.166824217 container init 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:02:03 compute-0 podman[295354]: 2025-10-01 14:02:03.094393277 +0000 UTC m=+0.178669113 container start 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:02:03 compute-0 podman[295354]: 2025-10-01 14:02:03.098413895 +0000 UTC m=+0.182689761 container attach 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:02:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:04 compute-0 funny_wiles[295370]: {
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_id": 0,
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "type": "bluestore"
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     },
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_id": 2,
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "type": "bluestore"
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     },
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_id": 1,
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:02:04 compute-0 funny_wiles[295370]:         "type": "bluestore"
Oct 01 14:02:04 compute-0 funny_wiles[295370]:     }
Oct 01 14:02:04 compute-0 funny_wiles[295370]: }
Oct 01 14:02:04 compute-0 systemd[1]: libpod-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Deactivated successfully.
Oct 01 14:02:04 compute-0 podman[295354]: 2025-10-01 14:02:04.26290584 +0000 UTC m=+1.347181686 container died 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:02:04 compute-0 systemd[1]: libpod-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Consumed 1.179s CPU time.
Oct 01 14:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-660084ec14f123dab2a66b4a6c6eef5d7154c0b394cac0b89115326f77daf23d-merged.mount: Deactivated successfully.
Oct 01 14:02:04 compute-0 podman[295354]: 2025-10-01 14:02:04.338324175 +0000 UTC m=+1.422600021 container remove 371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:02:04 compute-0 systemd[1]: libpod-conmon-371991a56cad0981e2fff4b187ab166fc77a776f82892935210ffad6746b0745.scope: Deactivated successfully.
Oct 01 14:02:04 compute-0 sudo[295246]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:02:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:02:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:02:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:02:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0bda8f74-d870-4364-a23b-99e0276717ed does not exist
Oct 01 14:02:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7097a772-e1e7-4cbb-b42c-72deffc99bac does not exist
Oct 01 14:02:04 compute-0 ceph-mon[74802]: pgmap v1774: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:02:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:02:04 compute-0 sudo[295417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:02:04 compute-0 sudo[295417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:04 compute-0 sudo[295417]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:04 compute-0 sudo[295442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:02:04 compute-0 sudo[295442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:02:04 compute-0 sudo[295442]: pam_unix(sudo:session): session closed for user root
Oct 01 14:02:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:06 compute-0 ceph-mon[74802]: pgmap v1775: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:07 compute-0 ceph-mon[74802]: pgmap v1776: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:10 compute-0 ceph-mon[74802]: pgmap v1777: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.329 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:02:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:02:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:02:12 compute-0 ceph-mon[74802]: pgmap v1778: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:13 compute-0 nova_compute[260022]: 2025-10-01 14:02:13.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:14 compute-0 ceph-mon[74802]: pgmap v1779: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:16 compute-0 ceph-mon[74802]: pgmap v1780: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:18 compute-0 ceph-mon[74802]: pgmap v1781: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:20 compute-0 ceph-mon[74802]: pgmap v1782: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.485 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.485 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.486 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:02:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:02:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804693010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:02:21 compute-0 nova_compute[260022]: 2025-10-01 14:02:21.953 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:02:21 compute-0 ceph-mon[74802]: pgmap v1783: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.172 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.174 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.175 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.176 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.359 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.393 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.394 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.448 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:02:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:02:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242345982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.925 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.930 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.943 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.944 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:02:22 compute-0 nova_compute[260022]: 2025-10-01 14:02:22.944 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:02:23 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3804693010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:02:23 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3242345982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:02:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:24 compute-0 ceph-mon[74802]: pgmap v1784: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:24 compute-0 podman[295513]: 2025-10-01 14:02:24.516641316 +0000 UTC m=+0.065483911 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 01 14:02:24 compute-0 podman[295514]: 2025-10-01 14:02:24.535516655 +0000 UTC m=+0.070647473 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 01 14:02:24 compute-0 podman[295515]: 2025-10-01 14:02:24.54037814 +0000 UTC m=+0.085350442 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 01 14:02:24 compute-0 podman[295512]: 2025-10-01 14:02:24.551437621 +0000 UTC m=+0.099973276 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:02:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:25 compute-0 ceph-mon[74802]: pgmap v1785: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:27 compute-0 ceph-mon[74802]: pgmap v1786: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.941 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.941 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.942 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.942 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.980 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.981 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:28 compute-0 nova_compute[260022]: 2025-10-01 14:02:28.981 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:29 compute-0 ceph-mon[74802]: pgmap v1787: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.905 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:48:e1 10.100.0.2 2001:db8::f816:3eff:fe14:48e1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe14:48e1/64', 'neutron:device_id': 'ovnmeta-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bf38cee-9f0a-4197-9a6f-788e9a83e343, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=08585e9b-5812-4d5d-a480-669a92c443db) old=Port_Binding(mac=['fa:16:3e:14:48:e1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83553c01-35f0-4f4a-9abd-9fde4d9e3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:02:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.906 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 08585e9b-5812-4d5d-a480-669a92c443db in datapath 83553c01-35f0-4f4a-9abd-9fde4d9e3ae3 updated
Oct 01 14:02:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.907 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83553c01-35f0-4f4a-9abd-9fde4d9e3ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:02:30 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:30.908 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[922955a2-bada-4761-8583-24e22deeda9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:02:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:31 compute-0 nova_compute[260022]: 2025-10-01 14:02:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:32 compute-0 ceph-mon[74802]: pgmap v1788: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:34 compute-0 ceph-mon[74802]: pgmap v1789: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:36 compute-0 ceph-mon[74802]: pgmap v1790: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:37 compute-0 ceph-mon[74802]: pgmap v1791: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:40 compute-0 ceph-mon[74802]: pgmap v1792: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:40 compute-0 nova_compute[260022]: 2025-10-01 14:02:40.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:02:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:42 compute-0 ceph-mon[74802]: pgmap v1793: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:44 compute-0 ceph-mon[74802]: pgmap v1794: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:45 compute-0 ceph-mon[74802]: pgmap v1795: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:46 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:46.778 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:02:46 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:46.782 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:02:47
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'volumes', 'backups', 'default.rgw.log']
Oct 01 14:02:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:02:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:02:48 compute-0 ceph-mon[74802]: pgmap v1796: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:49 compute-0 ceph-mon[74802]: pgmap v1797: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:52 compute-0 ceph-mon[74802]: pgmap v1798: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:53 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:02:53.785 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:02:54 compute-0 ceph-mon[74802]: pgmap v1799: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:02:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:02:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:02:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:02:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:02:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2266405777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:02:55 compute-0 podman[295593]: 2025-10-01 14:02:55.508448891 +0000 UTC m=+0.063221349 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Oct 01 14:02:55 compute-0 podman[295594]: 2025-10-01 14:02:55.514276426 +0000 UTC m=+0.065130950 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:02:55 compute-0 podman[295592]: 2025-10-01 14:02:55.534366064 +0000 UTC m=+0.094561384 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 01 14:02:55 compute-0 podman[295601]: 2025-10-01 14:02:55.540398745 +0000 UTC m=+0.077555294 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 14:02:56 compute-0 ceph-mon[74802]: pgmap v1800: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:02:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:02:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:02:58 compute-0 ceph-mon[74802]: pgmap v1801: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:02:59 compute-0 ceph-mon[74802]: pgmap v1802: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:02 compute-0 ceph-mon[74802]: pgmap v1803: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:04 compute-0 ceph-mon[74802]: pgmap v1804: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:04 compute-0 sudo[295674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:04 compute-0 sudo[295674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:04 compute-0 sudo[295674]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:04 compute-0 sudo[295699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:03:04 compute-0 sudo[295699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:04 compute-0 sudo[295699]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:04 compute-0 sudo[295724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:04 compute-0 sudo[295724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:04 compute-0 sudo[295724]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:04 compute-0 sudo[295749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 14:03:04 compute-0 sudo[295749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:05 compute-0 sudo[295749]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:03:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:03:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:05 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:05 compute-0 sudo[295796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:05 compute-0 sudo[295796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:05 compute-0 sudo[295796]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:05 compute-0 sudo[295821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:03:05 compute-0 sudo[295821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:05 compute-0 sudo[295821]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:05 compute-0 sudo[295846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:05 compute-0 sudo[295846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:05 compute-0 sudo[295846]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:05 compute-0 sudo[295871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:03:05 compute-0 sudo[295871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:06 compute-0 sudo[295871]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ccd67441-5b3f-4df8-98dd-6c17159a0637 does not exist
Oct 01 14:03:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a03f3085-caee-4548-85b8-1b9f99a96015 does not exist
Oct 01 14:03:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3ac48bb6-59ac-428b-b96c-288ac6137178 does not exist
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:03:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:06 compute-0 ceph-mon[74802]: pgmap v1805: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:03:06 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:03:06 compute-0 sudo[295928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:06 compute-0 sudo[295928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:06 compute-0 sudo[295928]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:06 compute-0 sudo[295953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:03:06 compute-0 sudo[295953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:06 compute-0 sudo[295953]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:06 compute-0 sudo[295978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:06 compute-0 sudo[295978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:06 compute-0 sudo[295978]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:06 compute-0 sudo[296003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:03:06 compute-0 sudo[296003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:06.986443809 +0000 UTC m=+0.020591285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.104682313 +0000 UTC m=+0.138829779 container create a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:03:07 compute-0 systemd[1]: Started libpod-conmon-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope.
Oct 01 14:03:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.424838039 +0000 UTC m=+0.458985555 container init a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.438667148 +0000 UTC m=+0.472814614 container start a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:03:07 compute-0 ecstatic_rubin[296084]: 167 167
Oct 01 14:03:07 compute-0 systemd[1]: libpod-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope: Deactivated successfully.
Oct 01 14:03:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:03:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:03:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.563047217 +0000 UTC m=+0.597194683 container attach a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.564351118 +0000 UTC m=+0.598498584 container died a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:03:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4c9c78f6de8d5b03db5ec1f8acdfd5e30d2a47dc3ab93a1c58e2f45327131d-merged.mount: Deactivated successfully.
Oct 01 14:03:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:07 compute-0 podman[296068]: 2025-10-01 14:03:07.920475347 +0000 UTC m=+0.954622813 container remove a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:03:07 compute-0 systemd[1]: libpod-conmon-a510e4478277fd50d6246b7970b448774d35418d00cca99ddbfa11e43e332f5a.scope: Deactivated successfully.
Oct 01 14:03:08 compute-0 podman[296109]: 2025-10-01 14:03:08.198873457 +0000 UTC m=+0.096263009 container create 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 14:03:08 compute-0 podman[296109]: 2025-10-01 14:03:08.14353988 +0000 UTC m=+0.040929492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:08 compute-0 systemd[1]: Started libpod-conmon-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope.
Oct 01 14:03:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:08 compute-0 podman[296109]: 2025-10-01 14:03:08.437496793 +0000 UTC m=+0.334886405 container init 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:03:08 compute-0 podman[296109]: 2025-10-01 14:03:08.450365151 +0000 UTC m=+0.347754703 container start 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:03:08 compute-0 podman[296109]: 2025-10-01 14:03:08.520470057 +0000 UTC m=+0.417859669 container attach 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 14:03:08 compute-0 ceph-mon[74802]: pgmap v1806: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:09 compute-0 festive_gould[296125]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:03:09 compute-0 festive_gould[296125]: --> relative data size: 1.0
Oct 01 14:03:09 compute-0 festive_gould[296125]: --> All data devices are unavailable
Oct 01 14:03:09 compute-0 systemd[1]: libpod-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Deactivated successfully.
Oct 01 14:03:09 compute-0 systemd[1]: libpod-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Consumed 1.026s CPU time.
Oct 01 14:03:09 compute-0 podman[296109]: 2025-10-01 14:03:09.529073553 +0000 UTC m=+1.426463145 container died 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:03:09 compute-0 ceph-mon[74802]: pgmap v1807: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-59094d9b0320cc3a80b5b37601734338a7968351c1672065580e88403033b1d9-merged.mount: Deactivated successfully.
Oct 01 14:03:09 compute-0 podman[296109]: 2025-10-01 14:03:09.928887518 +0000 UTC m=+1.826277040 container remove 3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gould, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:03:09 compute-0 systemd[1]: libpod-conmon-3c7fb9f70629e3bc270ac423df68435accd7686826e2f0000a66e422e794d94f.scope: Deactivated successfully.
Oct 01 14:03:09 compute-0 sudo[296003]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:10 compute-0 sudo[296168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:10 compute-0 sudo[296168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:10 compute-0 sudo[296168]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:10 compute-0 sudo[296193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:03:10 compute-0 sudo[296193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:10 compute-0 sudo[296193]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:10 compute-0 sudo[296218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:10 compute-0 sudo[296218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:10 compute-0 sudo[296218]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:10 compute-0 sudo[296243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:03:10 compute-0 sudo[296243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.687618269 +0000 UTC m=+0.053622944 container create 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:03:10 compute-0 systemd[1]: Started libpod-conmon-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope.
Oct 01 14:03:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.662312156 +0000 UTC m=+0.028316901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.770049256 +0000 UTC m=+0.136054011 container init 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.781915053 +0000 UTC m=+0.147919718 container start 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.785948541 +0000 UTC m=+0.151953236 container attach 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:03:10 compute-0 pedantic_solomon[296323]: 167 167
Oct 01 14:03:10 compute-0 systemd[1]: libpod-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope: Deactivated successfully.
Oct 01 14:03:10 compute-0 conmon[296323]: conmon 42fcb877a006e778629c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope/container/memory.events
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.791982473 +0000 UTC m=+0.157987128 container died 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a93008880adc9edede4d2a1de99bf57c903728149dfdbd4d46859a86c11602b-merged.mount: Deactivated successfully.
Oct 01 14:03:10 compute-0 podman[296307]: 2025-10-01 14:03:10.836981152 +0000 UTC m=+0.202985807 container remove 42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:03:10 compute-0 systemd[1]: libpod-conmon-42fcb877a006e778629cd37a70508287d7461be37aa73f74e0eb1753a487f207.scope: Deactivated successfully.
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.034109451 +0000 UTC m=+0.068985761 container create f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 14:03:11 compute-0 systemd[1]: Started libpod-conmon-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope.
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.007414593 +0000 UTC m=+0.042290953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:11 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.160662469 +0000 UTC m=+0.195538819 container init f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.1742463 +0000 UTC m=+0.209122610 container start f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.178391442 +0000 UTC m=+0.213267812 container attach f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct 01 14:03:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]: {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     "0": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "devices": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "/dev/loop3"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             ],
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_name": "ceph_lv0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_size": "21470642176",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "name": "ceph_lv0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "tags": {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_name": "ceph",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.crush_device_class": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.encrypted": "0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_id": "0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.vdo": "0"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             },
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "vg_name": "ceph_vg0"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         }
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     ],
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     "1": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "devices": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "/dev/loop4"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             ],
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_name": "ceph_lv1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_size": "21470642176",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "name": "ceph_lv1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "tags": {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_name": "ceph",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.crush_device_class": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.encrypted": "0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_id": "1",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.vdo": "0"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             },
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "vg_name": "ceph_vg1"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         }
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     ],
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     "2": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "devices": [
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "/dev/loop5"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             ],
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_name": "ceph_lv2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_size": "21470642176",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "name": "ceph_lv2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "tags": {
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.cluster_name": "ceph",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.crush_device_class": "",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.encrypted": "0",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osd_id": "2",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:                 "ceph.vdo": "0"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             },
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "type": "block",
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:             "vg_name": "ceph_vg2"
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:         }
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]:     ]
Oct 01 14:03:11 compute-0 beautiful_hoover[296361]: }
Oct 01 14:03:11 compute-0 systemd[1]: libpod-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope: Deactivated successfully.
Oct 01 14:03:11 compute-0 podman[296345]: 2025-10-01 14:03:11.975699108 +0000 UTC m=+1.010575388 container died f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d6b8080f723010459f5ca3c7022b6be789a8c5d2d2164d1efd15c332cbdc1e-merged.mount: Deactivated successfully.
Oct 01 14:03:12 compute-0 podman[296345]: 2025-10-01 14:03:12.031054956 +0000 UTC m=+1.065931236 container remove f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hoover, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:03:12 compute-0 systemd[1]: libpod-conmon-f01c15c8efe80aaaad3c65e55b1974f31af6c2d30232842698cd8b7ed10c3ad8.scope: Deactivated successfully.
Oct 01 14:03:12 compute-0 sudo[296243]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:12 compute-0 sudo[296383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:12 compute-0 sudo[296383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:12 compute-0 sudo[296383]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:12 compute-0 sudo[296408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:03:12 compute-0 sudo[296408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:12 compute-0 sudo[296408]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:12 compute-0 sudo[296433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:12 compute-0 sudo[296433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:12 compute-0 sudo[296433]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.330 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:03:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:03:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:03:12 compute-0 ceph-mon[74802]: pgmap v1808: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:12 compute-0 sudo[296458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:03:12 compute-0 sudo[296458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.759709482 +0000 UTC m=+0.057035883 container create 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 14:03:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:12 compute-0 systemd[1]: Started libpod-conmon-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope.
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.733413466 +0000 UTC m=+0.030739907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.862925899 +0000 UTC m=+0.160252290 container init 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.868495546 +0000 UTC m=+0.165821897 container start 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.872057489 +0000 UTC m=+0.169383920 container attach 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:03:12 compute-0 elegant_franklin[296540]: 167 167
Oct 01 14:03:12 compute-0 systemd[1]: libpod-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope: Deactivated successfully.
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.877538763 +0000 UTC m=+0.174865154 container died 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 14:03:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-443c39f2b96bfa6182a6ca802699b0e25d391a5d5c192e4db60b3a1c340be422-merged.mount: Deactivated successfully.
Oct 01 14:03:12 compute-0 podman[296524]: 2025-10-01 14:03:12.928401697 +0000 UTC m=+0.225728078 container remove 4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:03:12 compute-0 systemd[1]: libpod-conmon-4e6d814c7920f3d998250e57fb6e2b32521c4637b369c9ff960bec56181b0e1f.scope: Deactivated successfully.
Oct 01 14:03:13 compute-0 podman[296564]: 2025-10-01 14:03:13.123326437 +0000 UTC m=+0.056313579 container create 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 14:03:13 compute-0 systemd[1]: Started libpod-conmon-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope.
Oct 01 14:03:13 compute-0 podman[296564]: 2025-10-01 14:03:13.094368677 +0000 UTC m=+0.027355869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:03:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:03:13 compute-0 podman[296564]: 2025-10-01 14:03:13.241962084 +0000 UTC m=+0.174949266 container init 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:03:13 compute-0 podman[296564]: 2025-10-01 14:03:13.25758744 +0000 UTC m=+0.190574572 container start 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 14:03:13 compute-0 podman[296564]: 2025-10-01 14:03:13.261440812 +0000 UTC m=+0.194427914 container attach 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 14:03:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:14 compute-0 keen_hugle[296581]: {
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_id": 0,
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "type": "bluestore"
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     },
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_id": 2,
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "type": "bluestore"
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     },
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_id": 1,
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:03:14 compute-0 keen_hugle[296581]:         "type": "bluestore"
Oct 01 14:03:14 compute-0 keen_hugle[296581]:     }
Oct 01 14:03:14 compute-0 keen_hugle[296581]: }
Oct 01 14:03:14 compute-0 systemd[1]: libpod-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Deactivated successfully.
Oct 01 14:03:14 compute-0 systemd[1]: libpod-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Consumed 1.055s CPU time.
Oct 01 14:03:14 compute-0 ceph-mon[74802]: pgmap v1809: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:14 compute-0 podman[296614]: 2025-10-01 14:03:14.35989158 +0000 UTC m=+0.034701872 container died 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 14:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a5cfdcb707a2fe826c6c525a9dd74f9029b60a4bed30abc85503a509be5a261-merged.mount: Deactivated successfully.
Oct 01 14:03:14 compute-0 podman[296614]: 2025-10-01 14:03:14.432473155 +0000 UTC m=+0.107283417 container remove 2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hugle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 14:03:14 compute-0 systemd[1]: libpod-conmon-2ddd06a2250b79f625c3466c9efa01f112632230a76576c151d5fd73b88f02f6.scope: Deactivated successfully.
Oct 01 14:03:14 compute-0 sudo[296458]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:03:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:03:14 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bc77edc4-fd59-4386-aede-f8a509f44582 does not exist
Oct 01 14:03:14 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2b625305-b996-43b3-8f82-a85906c6b229 does not exist
Oct 01 14:03:14 compute-0 sudo[296629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:03:14 compute-0 sudo[296629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:14 compute-0 sudo[296629]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:14 compute-0 sudo[296654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:03:14 compute-0 sudo[296654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:03:14 compute-0 sudo[296654]: pam_unix(sudo:session): session closed for user root
Oct 01 14:03:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:15 compute-0 nova_compute[260022]: 2025-10-01 14:03:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:15 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:03:15 compute-0 ceph-mon[74802]: pgmap v1810: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:18 compute-0 ceph-mon[74802]: pgmap v1811: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:20 compute-0 ceph-mon[74802]: pgmap v1812: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.433 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.433 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.434 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.434 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.435 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:03:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:03:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190491000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:03:21 compute-0 nova_compute[260022]: 2025-10-01 14:03:21.912 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.079 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.080 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5044MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.080 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.081 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.228 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.293 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.293 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.294 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.363 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:03:22 compute-0 ceph-mon[74802]: pgmap v1813: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3190491000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:03:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:03:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019550469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.860 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.866 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.897 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.899 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:03:22 compute-0 nova_compute[260022]: 2025-10-01 14:03:22.900 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:03:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:23 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1019550469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:03:24 compute-0 ceph-mon[74802]: pgmap v1814: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:24 compute-0 nova_compute[260022]: 2025-10-01 14:03:24.901 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:26 compute-0 ceph-mon[74802]: pgmap v1815: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:26 compute-0 podman[296726]: 2025-10-01 14:03:26.532661981 +0000 UTC m=+0.066440391 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 01 14:03:26 compute-0 podman[296724]: 2025-10-01 14:03:26.542216915 +0000 UTC m=+0.081634324 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:03:26 compute-0 podman[296725]: 2025-10-01 14:03:26.542616256 +0000 UTC m=+0.074312830 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 14:03:26 compute-0 podman[296723]: 2025-10-01 14:03:26.573837638 +0000 UTC m=+0.119729153 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:03:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:27 compute-0 nova_compute[260022]: 2025-10-01 14:03:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:28 compute-0 nova_compute[260022]: 2025-10-01 14:03:28.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:28 compute-0 ceph-mon[74802]: pgmap v1816: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:29 compute-0 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:29 compute-0 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:03:29 compute-0 nova_compute[260022]: 2025-10-01 14:03:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:03:29 compute-0 nova_compute[260022]: 2025-10-01 14:03:29.503 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:03:29 compute-0 nova_compute[260022]: 2025-10-01 14:03:29.503 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:29 compute-0 ceph-mon[74802]: pgmap v1817: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:31 compute-0 nova_compute[260022]: 2025-10-01 14:03:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:03:32 compute-0 ceph-mon[74802]: pgmap v1818: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:34 compute-0 ceph-mon[74802]: pgmap v1819: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:34.380 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:03:34 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:34.381 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:03:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:36 compute-0 ceph-mon[74802]: pgmap v1820: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:37 compute-0 ceph-mon[74802]: pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:39.384 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:03:40 compute-0 ceph-mon[74802]: pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:41 compute-0 ceph-mon[74802]: pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.488 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:f3:83 2001:db8:0:1:f816:3eff:fe7a:f383 2001:db8::f816:3eff:fe7a:f383'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7a:f383/64 2001:db8::f816:3eff:fe7a:f383/64', 'neutron:device_id': 'ovnmeta-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7aeb023-eb42-4942-80f5-14a39f62d9bf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=75887985-9cf4-4a14-8823-578c8c134e7d) old=Port_Binding(mac=['fa:16:3e:7a:f3:83 2001:db8::f816:3eff:fe7a:f383'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7a:f383/64', 'neutron:device_id': 'ovnmeta-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63c03399-cac3-4361-81d6-fd2f133d14dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:03:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.489 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 75887985-9cf4-4a14-8823-578c8c134e7d in datapath 63c03399-cac3-4361-81d6-fd2f133d14dc updated
Oct 01 14:03:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.490 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 63c03399-cac3-4361-81d6-fd2f133d14dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:03:43 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:03:43.492 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cbdf0f-2030-41c1-842e-30bbc8a1961f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:03:44 compute-0 ceph-mon[74802]: pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:46 compute-0 ceph-mon[74802]: pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:03:47
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'images', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Oct 01 14:03:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:03:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:03:48 compute-0 ceph-mon[74802]: pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:50 compute-0 ceph-mon[74802]: pgmap v1827: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:52 compute-0 ceph-mon[74802]: pgmap v1828: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:54 compute-0 ceph-mon[74802]: pgmap v1829: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:03:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:03:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:03:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:03:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:03:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3897872834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:03:56 compute-0 ceph-mon[74802]: pgmap v1830: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:57 compute-0 podman[296802]: 2025-10-01 14:03:57.508796994 +0000 UTC m=+0.058535769 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:03:57 compute-0 podman[296809]: 2025-10-01 14:03:57.52531087 +0000 UTC m=+0.061207316 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 01 14:03:57 compute-0 podman[296801]: 2025-10-01 14:03:57.530085361 +0000 UTC m=+0.083292476 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:03:57 compute-0 podman[296803]: 2025-10-01 14:03:57.557512722 +0000 UTC m=+0.096471664 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 01 14:03:57 compute-0 ceph-mon[74802]: pgmap v1831: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:03:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:03:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:03:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:00 compute-0 ceph-mon[74802]: pgmap v1832: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:02 compute-0 ceph-mon[74802]: pgmap v1833: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:03 compute-0 ceph-mon[74802]: pgmap v1834: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:06 compute-0 ceph-mon[74802]: pgmap v1835: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:07 compute-0 ceph-mon[74802]: pgmap v1836: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:10 compute-0 ceph-mon[74802]: pgmap v1837: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.330 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:04:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:04:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:04:12 compute-0 ceph-mon[74802]: pgmap v1838: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.693 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:ee:5b 2001:db8:0:1:f816:3eff:fe0f:ee5b 2001:db8::f816:3eff:fe0f:ee5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe0f:ee5b/64 2001:db8::f816:3eff:fe0f:ee5b/64', 'neutron:device_id': 'ovnmeta-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cfde4a39-5828-4f9a-8a92-23d6b4d71d7c, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09c983f0-ec35-40f6-b974-4b6581d9c9e3) old=Port_Binding(mac=['fa:16:3e:0f:ee:5b 2001:db8::f816:3eff:fe0f:ee5b'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:ee5b/64', 'neutron:device_id': 'ovnmeta-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d6028c0-c737-4798-8468-d69b94cf6fb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:04:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.695 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09c983f0-ec35-40f6-b974-4b6581d9c9e3 in datapath 1d6028c0-c737-4798-8468-d69b94cf6fb7 updated
Oct 01 14:04:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.697 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d6028c0-c737-4798-8468-d69b94cf6fb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:04:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:13.698 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[475ece58-d0e1-41cc-851c-74693663746b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:04:13 compute-0 ceph-mon[74802]: pgmap v1839: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:14 compute-0 sudo[296879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:14 compute-0 sudo[296879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:14 compute-0 sudo[296879]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:14 compute-0 sudo[296904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:04:14 compute-0 sudo[296904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:14 compute-0 sudo[296904]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:14 compute-0 sudo[296929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:14 compute-0 sudo[296929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:14 compute-0 sudo[296929]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:14 compute-0 sudo[296954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 14:04:14 compute-0 sudo[296954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:15 compute-0 nova_compute[260022]: 2025-10-01 14:04:15.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:15 compute-0 podman[297052]: 2025-10-01 14:04:15.604694713 +0000 UTC m=+0.173568062 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 14:04:15 compute-0 podman[297073]: 2025-10-01 14:04:15.870986409 +0000 UTC m=+0.079291529 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:04:15 compute-0 podman[297052]: 2025-10-01 14:04:15.932300486 +0000 UTC m=+0.501173845 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 14:04:16 compute-0 ceph-mon[74802]: pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:17 compute-0 sudo[296954]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:04:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:04:17 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:17 compute-0 sudo[297214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:17 compute-0 sudo[297214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:17 compute-0 sudo[297214]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:17 compute-0 sudo[297239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:04:17 compute-0 sudo[297239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:17 compute-0 sudo[297239]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:17 compute-0 sudo[297264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:17 compute-0 sudo[297264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:17 compute-0 sudo[297264]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:17 compute-0 sudo[297289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:04:17 compute-0 sudo[297289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:18 compute-0 sudo[297289]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 34bc4630-8a73-4f66-946a-70e2281fdadf does not exist
Oct 01 14:04:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9188ac65-de96-49ac-aeea-998141cf8c54 does not exist
Oct 01 14:04:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev efac1cd4-e74f-4f4e-9b3d-4b1b4c4b0d47 does not exist
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:04:18 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:18 compute-0 ceph-mon[74802]: pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:04:18 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:04:18 compute-0 sudo[297347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:18 compute-0 sudo[297347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:18 compute-0 sudo[297347]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:18 compute-0 sudo[297372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:04:18 compute-0 sudo[297372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:18 compute-0 sudo[297372]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:18 compute-0 sudo[297397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:18 compute-0 sudo[297397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:18 compute-0 sudo[297397]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:18 compute-0 sudo[297422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:04:18 compute-0 sudo[297422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:18 compute-0 podman[297488]: 2025-10-01 14:04:18.834198846 +0000 UTC m=+0.032512113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:19 compute-0 podman[297488]: 2025-10-01 14:04:19.259254282 +0000 UTC m=+0.457567449 container create a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 14:04:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:04:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:04:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:04:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:19 compute-0 systemd[1]: Started libpod-conmon-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope.
Oct 01 14:04:19 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:19 compute-0 podman[297488]: 2025-10-01 14:04:19.867485754 +0000 UTC m=+1.065799021 container init a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:04:19 compute-0 podman[297488]: 2025-10-01 14:04:19.879061772 +0000 UTC m=+1.077374989 container start a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:04:19 compute-0 vibrant_mayer[297504]: 167 167
Oct 01 14:04:19 compute-0 systemd[1]: libpod-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope: Deactivated successfully.
Oct 01 14:04:19 compute-0 podman[297488]: 2025-10-01 14:04:19.950542451 +0000 UTC m=+1.148855658 container attach a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:04:19 compute-0 podman[297488]: 2025-10-01 14:04:19.951930626 +0000 UTC m=+1.150243843 container died a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 14:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4fa94b59ee3324036bfb21683f43046989f1a7931af88f2803eb4b3991b127f-merged.mount: Deactivated successfully.
Oct 01 14:04:20 compute-0 ceph-mon[74802]: pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:20 compute-0 podman[297488]: 2025-10-01 14:04:20.903923954 +0000 UTC m=+2.102237171 container remove a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:04:20 compute-0 systemd[1]: libpod-conmon-a03dd5658d5c936acfa7c770c6a6788f4440f0c841f9c215892c1b240b62af4c.scope: Deactivated successfully.
Oct 01 14:04:21 compute-0 podman[297528]: 2025-10-01 14:04:21.2014078 +0000 UTC m=+0.111592496 container create b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:04:21 compute-0 podman[297528]: 2025-10-01 14:04:21.129421073 +0000 UTC m=+0.039605809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:21 compute-0 systemd[1]: Started libpod-conmon-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope.
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:21 compute-0 podman[297528]: 2025-10-01 14:04:21.397335661 +0000 UTC m=+0.307520427 container init b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 14:04:21 compute-0 podman[297528]: 2025-10-01 14:04:21.412232433 +0000 UTC m=+0.322417139 container start b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:04:21 compute-0 podman[297528]: 2025-10-01 14:04:21.425115133 +0000 UTC m=+0.335299869 container attach b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:04:21 compute-0 ceph-mon[74802]: pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:21 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:04:21 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109430700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.806 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.987 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.988 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5009MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.989 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:04:21 compute-0 nova_compute[260022]: 2025-10-01 14:04:21.989 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.146 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.163 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance ed1a583a-b018-407d-9bb0-31b0d7eca6fd has allocations against this compute host but is not found in the database.
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.163 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.238 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:04:22 compute-0 naughty_black[297545]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:04:22 compute-0 naughty_black[297545]: --> relative data size: 1.0
Oct 01 14:04:22 compute-0 naughty_black[297545]: --> All data devices are unavailable
Oct 01 14:04:22 compute-0 systemd[1]: libpod-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Deactivated successfully.
Oct 01 14:04:22 compute-0 podman[297528]: 2025-10-01 14:04:22.597651243 +0000 UTC m=+1.507835989 container died b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct 01 14:04:22 compute-0 systemd[1]: libpod-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Consumed 1.117s CPU time.
Oct 01 14:04:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:04:22 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065319070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.696 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.707 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.731 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.734 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:04:22 compute-0 nova_compute[260022]: 2025-10-01 14:04:22.735 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:04:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3109430700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:04:22 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1065319070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff8a1e9b84eaae49d90daedc13032588d2892c4c139246777535bc12f197734-merged.mount: Deactivated successfully.
Oct 01 14:04:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:23 compute-0 podman[297528]: 2025-10-01 14:04:23.815952067 +0000 UTC m=+2.726136773 container remove b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_black, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 14:04:23 compute-0 sudo[297422]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:23 compute-0 systemd[1]: libpod-conmon-b1832034e456d6d6c29453ec28bb0733f3f774f50ad0ea0b56e5a41f18b4b9f3.scope: Deactivated successfully.
Oct 01 14:04:23 compute-0 sudo[297630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:23 compute-0 sudo[297630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:23 compute-0 sudo[297630]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:24 compute-0 sudo[297655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:04:24 compute-0 sudo[297655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:24 compute-0 sudo[297655]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:24 compute-0 sudo[297680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:24 compute-0 sudo[297680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:24 compute-0 sudo[297680]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:24 compute-0 ceph-mon[74802]: pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:24 compute-0 sudo[297705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:04:24 compute-0 sudo[297705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.389552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464389645, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3460883, "memory_usage": 3516296, "flush_reason": "Manual Compaction"}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464671816, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3383793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35113, "largest_seqno": 37171, "table_properties": {"data_size": 3374430, "index_size": 5921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18755, "raw_average_key_size": 20, "raw_value_size": 3355818, "raw_average_value_size": 3592, "num_data_blocks": 263, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327237, "oldest_key_time": 1759327237, "file_creation_time": 1759327464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 282321 microseconds, and 12836 cpu microseconds.
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.671881) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3383793 bytes OK
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.671906) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.690985) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.691015) EVENT_LOG_v1 {"time_micros": 1759327464691005, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.691041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3452243, prev total WAL file size 3452243, number of live WAL files 2.
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.692571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3304KB)], [80(7322KB)]
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464692624, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10881903, "oldest_snapshot_seqno": -1}
Oct 01 14:04:24 compute-0 podman[297770]: 2025-10-01 14:04:24.642849701 +0000 UTC m=+0.037001046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:24 compute-0 podman[297770]: 2025-10-01 14:04:24.82953818 +0000 UTC m=+0.223689525 container create 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5809 keys, 9138023 bytes, temperature: kUnknown
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464842125, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9138023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9098712, "index_size": 23713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 146593, "raw_average_key_size": 25, "raw_value_size": 8993199, "raw_average_value_size": 1548, "num_data_blocks": 967, "num_entries": 5809, "num_filter_entries": 5809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327464, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.842354) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9138023 bytes
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.978809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.8 rd, 61.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6323, records dropped: 514 output_compression: NoCompression
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.978861) EVENT_LOG_v1 {"time_micros": 1759327464978841, "job": 46, "event": "compaction_finished", "compaction_time_micros": 149572, "compaction_time_cpu_micros": 37015, "output_level": 6, "num_output_files": 1, "total_output_size": 9138023, "num_input_records": 6323, "num_output_records": 5809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464980066, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327464982451, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.692447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:24 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:04:24.982526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:04:25 compute-0 systemd[1]: Started libpod-conmon-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope.
Oct 01 14:04:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:25 compute-0 podman[297770]: 2025-10-01 14:04:25.563427861 +0000 UTC m=+0.957579246 container init 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:04:25 compute-0 podman[297770]: 2025-10-01 14:04:25.581244497 +0000 UTC m=+0.975395842 container start 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:04:25 compute-0 priceless_albattani[297786]: 167 167
Oct 01 14:04:25 compute-0 systemd[1]: libpod-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope: Deactivated successfully.
Oct 01 14:04:25 compute-0 conmon[297786]: conmon 66308f67a2757ee337ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope/container/memory.events
Oct 01 14:04:25 compute-0 nova_compute[260022]: 2025-10-01 14:04:25.736 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:25 compute-0 podman[297770]: 2025-10-01 14:04:25.913260019 +0000 UTC m=+1.307411414 container attach 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 14:04:25 compute-0 podman[297770]: 2025-10-01 14:04:25.915021715 +0000 UTC m=+1.309173110 container died 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:04:25 compute-0 ceph-mon[74802]: pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed78117063b5828c81d28d46912a2914c6353569f16cb46dae5405f8449f8bcd-merged.mount: Deactivated successfully.
Oct 01 14:04:26 compute-0 podman[297770]: 2025-10-01 14:04:26.507106515 +0000 UTC m=+1.901257850 container remove 66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:04:26 compute-0 systemd[1]: libpod-conmon-66308f67a2757ee337aca0a37071fe25c135cb7a00625c655d0381aabd27f3d3.scope: Deactivated successfully.
Oct 01 14:04:26 compute-0 podman[297810]: 2025-10-01 14:04:26.723181456 +0000 UTC m=+0.044510124 container create 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:04:26 compute-0 systemd[1]: Started libpod-conmon-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope.
Oct 01 14:04:26 compute-0 podman[297810]: 2025-10-01 14:04:26.702963184 +0000 UTC m=+0.024291872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:26 compute-0 podman[297810]: 2025-10-01 14:04:26.82347548 +0000 UTC m=+0.144804168 container init 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:04:26 compute-0 podman[297810]: 2025-10-01 14:04:26.829997368 +0000 UTC m=+0.151326046 container start 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct 01 14:04:26 compute-0 podman[297810]: 2025-10-01 14:04:26.833892622 +0000 UTC m=+0.155221320 container attach 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 14:04:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:27 compute-0 modest_davinci[297827]: {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     "0": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "devices": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "/dev/loop3"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             ],
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_name": "ceph_lv0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_size": "21470642176",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "name": "ceph_lv0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "tags": {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_name": "ceph",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.crush_device_class": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.encrypted": "0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_id": "0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.vdo": "0"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             },
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "vg_name": "ceph_vg0"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         }
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     ],
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     "1": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "devices": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "/dev/loop4"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             ],
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_name": "ceph_lv1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_size": "21470642176",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "name": "ceph_lv1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "tags": {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_name": "ceph",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.crush_device_class": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.encrypted": "0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_id": "1",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.vdo": "0"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             },
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "vg_name": "ceph_vg1"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         }
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     ],
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     "2": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "devices": [
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "/dev/loop5"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             ],
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_name": "ceph_lv2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_size": "21470642176",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "name": "ceph_lv2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "tags": {
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.cluster_name": "ceph",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.crush_device_class": "",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.encrypted": "0",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osd_id": "2",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:                 "ceph.vdo": "0"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             },
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "type": "block",
Oct 01 14:04:27 compute-0 modest_davinci[297827]:             "vg_name": "ceph_vg2"
Oct 01 14:04:27 compute-0 modest_davinci[297827]:         }
Oct 01 14:04:27 compute-0 modest_davinci[297827]:     ]
Oct 01 14:04:27 compute-0 modest_davinci[297827]: }
Oct 01 14:04:27 compute-0 systemd[1]: libpod-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope: Deactivated successfully.
Oct 01 14:04:27 compute-0 podman[297810]: 2025-10-01 14:04:27.643662223 +0000 UTC m=+0.964990921 container died 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8432ceb636eec2ba428bf57e1bd679ceb1093a3b6ccbb268e60161617953deeb-merged.mount: Deactivated successfully.
Oct 01 14:04:27 compute-0 podman[297810]: 2025-10-01 14:04:27.703893956 +0000 UTC m=+1.025222624 container remove 5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_davinci, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 14:04:27 compute-0 systemd[1]: libpod-conmon-5a38f18b646b0773e41c36e01fa68e3539014d51c2d1fb996d5479b7b11c1146.scope: Deactivated successfully.
Oct 01 14:04:27 compute-0 sudo[297705]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:27 compute-0 podman[297844]: 2025-10-01 14:04:27.757004142 +0000 UTC m=+0.075283261 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 14:04:27 compute-0 podman[297845]: 2025-10-01 14:04:27.758015574 +0000 UTC m=+0.082381647 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid)
Oct 01 14:04:27 compute-0 podman[297837]: 2025-10-01 14:04:27.759090608 +0000 UTC m=+0.088310105 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:04:27 compute-0 podman[297846]: 2025-10-01 14:04:27.786496998 +0000 UTC m=+0.097268809 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 14:04:27 compute-0 sudo[297923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:27 compute-0 sudo[297923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:27 compute-0 sudo[297923]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:27 compute-0 sudo[297949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:04:27 compute-0 sudo[297949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:27 compute-0 sudo[297949]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:27 compute-0 sudo[297974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:27 compute-0 sudo[297974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:27 compute-0 sudo[297974]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:27 compute-0 sudo[297999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:04:27 compute-0 sudo[297999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:28 compute-0 nova_compute[260022]: 2025-10-01 14:04:28.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:28 compute-0 ceph-mon[74802]: pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.427643925 +0000 UTC m=+0.049236903 container create c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:04:28 compute-0 systemd[1]: Started libpod-conmon-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope.
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.405005257 +0000 UTC m=+0.026598265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.531540994 +0000 UTC m=+0.153134042 container init c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.542925876 +0000 UTC m=+0.164518884 container start c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.546792579 +0000 UTC m=+0.168385587 container attach c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:04:28 compute-0 agitated_merkle[298084]: 167 167
Oct 01 14:04:28 compute-0 systemd[1]: libpod-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope: Deactivated successfully.
Oct 01 14:04:28 compute-0 conmon[298084]: conmon c861c14ce59cf720a610 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope/container/memory.events
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.548526204 +0000 UTC m=+0.170119172 container died c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f520a843bf83b957aa10ca3d26ef4ed98426b5e83adc0ab8b877291f37453c60-merged.mount: Deactivated successfully.
Oct 01 14:04:28 compute-0 podman[298067]: 2025-10-01 14:04:28.584687202 +0000 UTC m=+0.206280170 container remove c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:04:28 compute-0 systemd[1]: libpod-conmon-c861c14ce59cf720a6107fa8ab628dc15524dd62a1f457c8914fc17d1b4177d0.scope: Deactivated successfully.
Oct 01 14:04:28 compute-0 podman[298107]: 2025-10-01 14:04:28.764099939 +0000 UTC m=+0.040431454 container create 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:04:28 compute-0 systemd[1]: Started libpod-conmon-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope.
Oct 01 14:04:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:04:28 compute-0 podman[298107]: 2025-10-01 14:04:28.838099269 +0000 UTC m=+0.114430844 container init 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 14:04:28 compute-0 podman[298107]: 2025-10-01 14:04:28.747925865 +0000 UTC m=+0.024257400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:04:28 compute-0 podman[298107]: 2025-10-01 14:04:28.847893259 +0000 UTC m=+0.124224774 container start 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 14:04:28 compute-0 podman[298107]: 2025-10-01 14:04:28.851552576 +0000 UTC m=+0.127884111 container attach 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:04:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:29 compute-0 nova_compute[260022]: 2025-10-01 14:04:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]: {
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_id": 0,
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "type": "bluestore"
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     },
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_id": 2,
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "type": "bluestore"
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     },
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_id": 1,
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:         "type": "bluestore"
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]:     }
Oct 01 14:04:29 compute-0 affectionate_dirac[298124]: }
Oct 01 14:04:29 compute-0 systemd[1]: libpod-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Deactivated successfully.
Oct 01 14:04:29 compute-0 podman[298107]: 2025-10-01 14:04:29.865860902 +0000 UTC m=+1.142192427 container died 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:04:29 compute-0 systemd[1]: libpod-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Consumed 1.023s CPU time.
Oct 01 14:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca1a76dadf543dfe02635d188de71c06ea0f923461da1b61d1ae6c76d052f77-merged.mount: Deactivated successfully.
Oct 01 14:04:30 compute-0 podman[298107]: 2025-10-01 14:04:30.331973652 +0000 UTC m=+1.608305207 container remove 472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 14:04:30 compute-0 systemd[1]: libpod-conmon-472100a411d8ac52c1e57a2e9608a5a34f951249288ab663091c9e71d152a69c.scope: Deactivated successfully.
Oct 01 14:04:30 compute-0 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:30 compute-0 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:04:30 compute-0 nova_compute[260022]: 2025-10-01 14:04:30.348 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:04:30 compute-0 nova_compute[260022]: 2025-10-01 14:04:30.369 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:04:30 compute-0 nova_compute[260022]: 2025-10-01 14:04:30.370 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:30 compute-0 sudo[297999]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:04:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:04:30 compute-0 ceph-mon[74802]: pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:30 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 79f48b31-3a26-40e4-ba9d-6426c8896d07 does not exist
Oct 01 14:04:30 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4ae6580a-5e29-4e32-b044-32af3bde5940 does not exist
Oct 01 14:04:30 compute-0 sudo[298169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:04:30 compute-0 sudo[298169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:30 compute-0 sudo[298169]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:30 compute-0 sudo[298194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:04:30 compute-0 sudo[298194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:04:30 compute-0 sudo[298194]: pam_unix(sudo:session): session closed for user root
Oct 01 14:04:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:31 compute-0 nova_compute[260022]: 2025-10-01 14:04:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:31 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:04:31 compute-0 ceph-mon[74802]: pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:34 compute-0 ceph-mon[74802]: pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:35 compute-0 ceph-mon[74802]: pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:35.551 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:04:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:35.552 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:04:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:38 compute-0 ceph-mon[74802]: pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:39 compute-0 ceph-mon[74802]: pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:40 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:04:40.554 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:04:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:42 compute-0 ceph-mon[74802]: pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:43 compute-0 ceph-mon[74802]: pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:44 compute-0 nova_compute[260022]: 2025-10-01 14:04:44.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:04:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:46 compute-0 ceph-mon[74802]: pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:04:47
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'vms']
Oct 01 14:04:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:04:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:04:48 compute-0 ceph-mon[74802]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:49 compute-0 ceph-mon[74802]: pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:52 compute-0 ceph-mon[74802]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:54 compute-0 ceph-mon[74802]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:04:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:04:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:04:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:04:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:04:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3908796319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:04:56 compute-0 ceph-mon[74802]: pgmap v1860: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:57 compute-0 ceph-mon[74802]: pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:04:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:04:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:04:58 compute-0 podman[298220]: 2025-10-01 14:04:58.517833776 +0000 UTC m=+0.071566233 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:04:58 compute-0 podman[298222]: 2025-10-01 14:04:58.528875697 +0000 UTC m=+0.070604493 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 14:04:58 compute-0 podman[298221]: 2025-10-01 14:04:58.533832544 +0000 UTC m=+0.083380798 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:04:58 compute-0 podman[298219]: 2025-10-01 14:04:58.551522586 +0000 UTC m=+0.104813489 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 01 14:04:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:00 compute-0 ceph-mon[74802]: pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:01 compute-0 ceph-mon[74802]: pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:03 compute-0 ceph-mon[74802]: pgmap v1864: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:05 compute-0 ceph-mon[74802]: pgmap v1865: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:07 compute-0 ceph-mon[74802]: pgmap v1866: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:09 compute-0 ceph-mon[74802]: pgmap v1867: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:11 compute-0 ceph-mon[74802]: pgmap v1868: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.331 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:05:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:05:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:05:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.720 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8::f816:3eff:fe38:38a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ebfed82-d7c9-4432-b11e-589de366cfae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=88ea7860-b2d6-4618-814f-53352d1b5566) old=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:05:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.722 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 88ea7860-b2d6-4618-814f-53352d1b5566 in datapath 71bcb114-f0e3-490a-8b09-1cfd544476b4 updated
Oct 01 14:05:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.723 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71bcb114-f0e3-490a-8b09-1cfd544476b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:05:13 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:13.724 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[4529d8bf-9278-41f7-8f65-f761f590c29f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:05:14 compute-0 ceph-mon[74802]: pgmap v1869: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:15 compute-0 nova_compute[260022]: 2025-10-01 14:05:15.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:16 compute-0 ceph-mon[74802]: pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:17 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.210 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8:0:1:f816:3eff:fe38:38a8 2001:db8::f816:3eff:fe38:38a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe38:38a8/64 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4ebfed82-d7c9-4432-b11e-589de366cfae, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=88ea7860-b2d6-4618-814f-53352d1b5566) old=Port_Binding(mac=['fa:16:3e:38:38:a8 10.100.0.2 2001:db8::f816:3eff:fe38:38a8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe38:38a8/64', 'neutron:device_id': 'ovnmeta-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71bcb114-f0e3-490a-8b09-1cfd544476b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:05:17 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.211 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 88ea7860-b2d6-4618-814f-53352d1b5566 in datapath 71bcb114-f0e3-490a-8b09-1cfd544476b4 updated
Oct 01 14:05:17 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.213 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71bcb114-f0e3-490a-8b09-1cfd544476b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:05:17 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:17.214 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[69ad2ce3-2847-4a4d-b758-a4da7e6a3756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:17 compute-0 ceph-mon[74802]: pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:19 compute-0 nova_compute[260022]: 2025-10-01 14:05:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:20 compute-0 ceph-mon[74802]: pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:21 compute-0 nova_compute[260022]: 2025-10-01 14:05:21.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:21 compute-0 nova_compute[260022]: 2025-10-01 14:05:21.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:05:22 compute-0 ceph-mon[74802]: pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.369 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.370 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:05:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:05:23 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082490945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.799 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.979 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:05:23 compute-0 nova_compute[260022]: 2025-10-01 14:05:23.980 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.060 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.073 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.087 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 88d7eb8f-28ed-4ee4-93c1-155f101dcd24 has allocations against this compute host but is not found in the database.
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.087 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.087 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.228 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:05:24 compute-0 ceph-mon[74802]: pgmap v1874: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:24 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4082490945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:05:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:05:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447927317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.679 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.684 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.699 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.700 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:05:24 compute-0 nova_compute[260022]: 2025-10-01 14:05:24.700 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:05:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:25 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3447927317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:05:26 compute-0 ceph-mon[74802]: pgmap v1875: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:27 compute-0 ceph-mon[74802]: pgmap v1876: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:27 compute-0 nova_compute[260022]: 2025-10-01 14:05:27.700 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:28 compute-0 nova_compute[260022]: 2025-10-01 14:05:28.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:29 compute-0 nova_compute[260022]: 2025-10-01 14:05:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:29 compute-0 nova_compute[260022]: 2025-10-01 14:05:29.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:29 compute-0 nova_compute[260022]: 2025-10-01 14:05:29.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 14:05:29 compute-0 nova_compute[260022]: 2025-10-01 14:05:29.358 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 14:05:29 compute-0 podman[298349]: 2025-10-01 14:05:29.530304974 +0000 UTC m=+0.061173714 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 14:05:29 compute-0 podman[298347]: 2025-10-01 14:05:29.530297894 +0000 UTC m=+0.068246418 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 01 14:05:29 compute-0 podman[298348]: 2025-10-01 14:05:29.542419588 +0000 UTC m=+0.079941799 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:05:29 compute-0 podman[298346]: 2025-10-01 14:05:29.551945391 +0000 UTC m=+0.094303555 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller)
Oct 01 14:05:29 compute-0 ceph-mon[74802]: pgmap v1877: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:30 compute-0 sudo[298426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:30 compute-0 sudo[298426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:30 compute-0 sudo[298426]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:30 compute-0 sudo[298451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:05:30 compute-0 sudo[298451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:30 compute-0 sudo[298451]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:31 compute-0 sudo[298476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:31 compute-0 sudo[298476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:31 compute-0 sudo[298476]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:31 compute-0 sudo[298501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:05:31 compute-0 sudo[298501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:31 compute-0 nova_compute[260022]: 2025-10-01 14:05:31.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:31 compute-0 nova_compute[260022]: 2025-10-01 14:05:31.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:05:31 compute-0 nova_compute[260022]: 2025-10-01 14:05:31.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:05:31 compute-0 nova_compute[260022]: 2025-10-01 14:05:31.374 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:05:31 compute-0 nova_compute[260022]: 2025-10-01 14:05:31.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:31 compute-0 sudo[298501]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 550061fc-e857-472f-b8e8-9a7e232aaea2 does not exist
Oct 01 14:05:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ce22cbe1-9a1b-43bf-a1a7-b673db0090bd does not exist
Oct 01 14:05:31 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0c13f29e-a5ca-405b-af4f-7f15a43ba6ac does not exist
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:05:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:05:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:05:31 compute-0 sudo[298559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:31 compute-0 sudo[298559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:31 compute-0 sudo[298559]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:31 compute-0 sudo[298584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:05:31 compute-0 sudo[298584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:31 compute-0 sudo[298584]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:32 compute-0 sudo[298609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:32 compute-0 sudo[298609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:32 compute-0 sudo[298609]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:32 compute-0 sudo[298634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:05:32 compute-0 sudo[298634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:32 compute-0 nova_compute[260022]: 2025-10-01 14:05:32.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:32 compute-0 ceph-mon[74802]: pgmap v1878: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:05:32 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.591788382 +0000 UTC m=+0.119955800 container create 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.499556823 +0000 UTC m=+0.027724231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:32 compute-0 systemd[1]: Started libpod-conmon-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope.
Oct 01 14:05:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.747224448 +0000 UTC m=+0.275391936 container init 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.757652989 +0000 UTC m=+0.285820407 container start 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 14:05:32 compute-0 strange_galois[298716]: 167 167
Oct 01 14:05:32 compute-0 systemd[1]: libpod-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope: Deactivated successfully.
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.796940926 +0000 UTC m=+0.325108404 container attach 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:05:32 compute-0 podman[298699]: 2025-10-01 14:05:32.798214346 +0000 UTC m=+0.326381734 container died 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:05:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d625dafbb46cda5464d019e685132f6cc036636bb7e434e2469895fe0ad6bcef-merged.mount: Deactivated successfully.
Oct 01 14:05:33 compute-0 podman[298699]: 2025-10-01 14:05:33.110718419 +0000 UTC m=+0.638885837 container remove 37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galois, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:05:33 compute-0 systemd[1]: libpod-conmon-37c6fd300db87762dfe31be651fc702f488fc6112f3d591c6b818a23013b3ccd.scope: Deactivated successfully.
Oct 01 14:05:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:33 compute-0 podman[298742]: 2025-10-01 14:05:33.416785057 +0000 UTC m=+0.105284974 container create 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 14:05:33 compute-0 podman[298742]: 2025-10-01 14:05:33.338625585 +0000 UTC m=+0.027125512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:33 compute-0 systemd[1]: Started libpod-conmon-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope.
Oct 01 14:05:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:33 compute-0 podman[298742]: 2025-10-01 14:05:33.599311782 +0000 UTC m=+0.287811699 container init 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 14:05:33 compute-0 podman[298742]: 2025-10-01 14:05:33.608130602 +0000 UTC m=+0.296630519 container start 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 14:05:33 compute-0 podman[298742]: 2025-10-01 14:05:33.675531832 +0000 UTC m=+0.364031769 container attach 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:05:34 compute-0 nervous_euclid[298759]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:05:34 compute-0 nervous_euclid[298759]: --> relative data size: 1.0
Oct 01 14:05:34 compute-0 nervous_euclid[298759]: --> All data devices are unavailable
Oct 01 14:05:34 compute-0 systemd[1]: libpod-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Deactivated successfully.
Oct 01 14:05:34 compute-0 podman[298742]: 2025-10-01 14:05:34.698401391 +0000 UTC m=+1.386901308 container died 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:05:34 compute-0 systemd[1]: libpod-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Consumed 1.046s CPU time.
Oct 01 14:05:34 compute-0 ceph-mon[74802]: pgmap v1879: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfcbece50978f941193095c88d8c027c0776b1cff56b97c747f03a2bfcb6a045-merged.mount: Deactivated successfully.
Oct 01 14:05:35 compute-0 podman[298742]: 2025-10-01 14:05:35.294017593 +0000 UTC m=+1.982517510 container remove 2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:05:35 compute-0 systemd[1]: libpod-conmon-2572413f2285224edf12b6ffa2cf14adde0822f1fceb55d35188558328a04937.scope: Deactivated successfully.
Oct 01 14:05:35 compute-0 sudo[298634]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:35 compute-0 sudo[298800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:35 compute-0 sudo[298800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:35 compute-0 sudo[298800]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:35 compute-0 sudo[298825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:05:35 compute-0 sudo[298825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:35 compute-0 sudo[298825]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:35 compute-0 sudo[298850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:35 compute-0 sudo[298850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:35 compute-0 sudo[298850]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:35 compute-0 sudo[298875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:05:35 compute-0 sudo[298875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:35 compute-0 ceph-mon[74802]: pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.093771326 +0000 UTC m=+0.032541244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.217098432 +0000 UTC m=+0.155868300 container create 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:05:36 compute-0 systemd[1]: Started libpod-conmon-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope.
Oct 01 14:05:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.46771966 +0000 UTC m=+0.406489578 container init 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.479238685 +0000 UTC m=+0.418008563 container start 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:05:36 compute-0 condescending_haslett[298957]: 167 167
Oct 01 14:05:36 compute-0 systemd[1]: libpod-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope: Deactivated successfully.
Oct 01 14:05:36 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:36.485 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:05:36 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:36.489 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.546532891 +0000 UTC m=+0.485302749 container attach 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.548012599 +0000 UTC m=+0.486782537 container died 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 14:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f817f87ddd3a9c8cade059e62cfc5231112825c88a4e1dd9a89c29b347c2c0-merged.mount: Deactivated successfully.
Oct 01 14:05:36 compute-0 podman[298941]: 2025-10-01 14:05:36.98393075 +0000 UTC m=+0.922700628 container remove 2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:05:36 compute-0 systemd[1]: libpod-conmon-2c2a01b9a0ffdae70ded2335158f8df189d1a9238e065e5b8d6af4c1c3a63c34.scope: Deactivated successfully.
Oct 01 14:05:37 compute-0 podman[298984]: 2025-10-01 14:05:37.263130805 +0000 UTC m=+0.093908952 container create 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 01 14:05:37 compute-0 podman[298984]: 2025-10-01 14:05:37.207467398 +0000 UTC m=+0.038245595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:37 compute-0 systemd[1]: Started libpod-conmon-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope.
Oct 01 14:05:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:37 compute-0 podman[298984]: 2025-10-01 14:05:37.500781952 +0000 UTC m=+0.331560109 container init 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:05:37 compute-0 podman[298984]: 2025-10-01 14:05:37.521566171 +0000 UTC m=+0.352344328 container start 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 14:05:37 compute-0 podman[298984]: 2025-10-01 14:05:37.528582204 +0000 UTC m=+0.359360361 container attach 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:05:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]: {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     "0": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "devices": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "/dev/loop3"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             ],
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_name": "ceph_lv0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_size": "21470642176",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "name": "ceph_lv0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "tags": {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_name": "ceph",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.crush_device_class": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.encrypted": "0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_id": "0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.vdo": "0"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             },
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "vg_name": "ceph_vg0"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         }
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     ],
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     "1": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "devices": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "/dev/loop4"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             ],
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_name": "ceph_lv1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_size": "21470642176",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "name": "ceph_lv1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "tags": {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_name": "ceph",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.crush_device_class": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.encrypted": "0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_id": "1",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.vdo": "0"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             },
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "vg_name": "ceph_vg1"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         }
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     ],
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     "2": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "devices": [
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "/dev/loop5"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             ],
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_name": "ceph_lv2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_size": "21470642176",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "name": "ceph_lv2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "tags": {
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.cluster_name": "ceph",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.crush_device_class": "",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.encrypted": "0",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osd_id": "2",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:                 "ceph.vdo": "0"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             },
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "type": "block",
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:             "vg_name": "ceph_vg2"
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:         }
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]:     ]
Oct 01 14:05:38 compute-0 crazy_heyrovsky[299002]: }
Oct 01 14:05:38 compute-0 systemd[1]: libpod-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope: Deactivated successfully.
Oct 01 14:05:38 compute-0 podman[298984]: 2025-10-01 14:05:38.309093697 +0000 UTC m=+1.139871824 container died 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 14:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a835f8d89e0b4f371c2c0cc400f2335ac96d058a0ff7f1b27353e7417398451-merged.mount: Deactivated successfully.
Oct 01 14:05:38 compute-0 podman[298984]: 2025-10-01 14:05:38.378673086 +0000 UTC m=+1.209451213 container remove 5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:05:38 compute-0 systemd[1]: libpod-conmon-5d08725e8a00514364dcaa7946e7e255195ae48bd8ebd4a2de3c64b3509fe988.scope: Deactivated successfully.
Oct 01 14:05:38 compute-0 sudo[298875]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:38 compute-0 ceph-mon[74802]: pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:38 compute-0 sudo[299023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:38 compute-0 sudo[299023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:38 compute-0 sudo[299023]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:38 compute-0 sudo[299048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:05:38 compute-0 sudo[299048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:38 compute-0 sudo[299048]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:38 compute-0 sudo[299073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:38 compute-0 sudo[299073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:38 compute-0 sudo[299073]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:38 compute-0 sudo[299098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:05:38 compute-0 sudo[299098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:39 compute-0 podman[299164]: 2025-10-01 14:05:39.163743104 +0000 UTC m=+0.057601631 container create e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:05:39 compute-0 systemd[1]: Started libpod-conmon-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope.
Oct 01 14:05:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:39 compute-0 podman[299164]: 2025-10-01 14:05:39.146797646 +0000 UTC m=+0.040656193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:39 compute-0 podman[299164]: 2025-10-01 14:05:39.248584697 +0000 UTC m=+0.142443304 container init e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 14:05:39 compute-0 podman[299164]: 2025-10-01 14:05:39.260242977 +0000 UTC m=+0.154101544 container start e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:05:39 compute-0 podman[299164]: 2025-10-01 14:05:39.265022969 +0000 UTC m=+0.158881536 container attach e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:05:39 compute-0 nervous_goldstine[299181]: 167 167
Oct 01 14:05:39 compute-0 systemd[1]: libpod-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope: Deactivated successfully.
Oct 01 14:05:39 compute-0 podman[299186]: 2025-10-01 14:05:39.316761112 +0000 UTC m=+0.036540131 container died e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ff8acfa1558267fd6d879d679658c71532a69ac492ae32071388753fc16c4c6-merged.mount: Deactivated successfully.
Oct 01 14:05:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:39 compute-0 podman[299186]: 2025-10-01 14:05:39.372725179 +0000 UTC m=+0.092504188 container remove e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:05:39 compute-0 systemd[1]: libpod-conmon-e9100987ee36dbb444da0e237051ed63fb54f862c753eaf279e2fffb406a9607.scope: Deactivated successfully.
Oct 01 14:05:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:39.492 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:05:39 compute-0 podman[299209]: 2025-10-01 14:05:39.642759233 +0000 UTC m=+0.067808394 container create 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 14:05:39 compute-0 systemd[1]: Started libpod-conmon-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope.
Oct 01 14:05:39 compute-0 podman[299209]: 2025-10-01 14:05:39.614934039 +0000 UTC m=+0.039983280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:05:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:05:39 compute-0 podman[299209]: 2025-10-01 14:05:39.770918503 +0000 UTC m=+0.195967764 container init 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:05:39 compute-0 podman[299209]: 2025-10-01 14:05:39.783086938 +0000 UTC m=+0.208136099 container start 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:05:39 compute-0 podman[299209]: 2025-10-01 14:05:39.787423367 +0000 UTC m=+0.212472568 container attach 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:05:40 compute-0 ceph-mon[74802]: pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:40 compute-0 exciting_jackson[299225]: {
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_id": 0,
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "type": "bluestore"
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     },
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_id": 2,
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "type": "bluestore"
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     },
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_id": 1,
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:         "type": "bluestore"
Oct 01 14:05:40 compute-0 exciting_jackson[299225]:     }
Oct 01 14:05:40 compute-0 exciting_jackson[299225]: }
Oct 01 14:05:40 compute-0 systemd[1]: libpod-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Deactivated successfully.
Oct 01 14:05:40 compute-0 podman[299209]: 2025-10-01 14:05:40.84037569 +0000 UTC m=+1.265424931 container died 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:05:40 compute-0 systemd[1]: libpod-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Consumed 1.073s CPU time.
Oct 01 14:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fab4794d735fccac2b7ed1c909b9fc72e8d213ffeb4383b6eafb7a5b9a2a4734-merged.mount: Deactivated successfully.
Oct 01 14:05:40 compute-0 podman[299209]: 2025-10-01 14:05:40.900221039 +0000 UTC m=+1.325270210 container remove 7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_jackson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:05:40 compute-0 systemd[1]: libpod-conmon-7418a24275c42612a88218a6d72fdefb50338f2b70daa9c09897990e0f9ec816.scope: Deactivated successfully.
Oct 01 14:05:40 compute-0 sudo[299098]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:05:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:05:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8d37a268-1c5c-4868-801f-2abd3b19e5e7 does not exist
Oct 01 14:05:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7df467f7-f369-469b-936b-4746451d9b35 does not exist
Oct 01 14:05:41 compute-0 sudo[299273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:05:41 compute-0 sudo[299273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:41 compute-0 sudo[299273]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:41 compute-0 sudo[299298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:05:41 compute-0 sudo[299298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:05:41 compute-0 sudo[299298]: pam_unix(sudo:session): session closed for user root
Oct 01 14:05:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:41 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:05:41 compute-0 ceph-mon[74802]: pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:42 compute-0 nova_compute[260022]: 2025-10-01 14:05:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:05:42 compute-0 nova_compute[260022]: 2025-10-01 14:05:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 14:05:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:44 compute-0 ceph-mon[74802]: pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:46 compute-0 ceph-mon[74802]: pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:05:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:05:47
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'backups', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct 01 14:05:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:05:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:05:48 compute-0 ceph-mon[74802]: pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:49 compute-0 ceph-mon[74802]: pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:50 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.815 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8::f816:3eff:fe42:44f3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce2640b-c69b-48a5-ac25-0e680aa474d5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e6d43e21-e122-4885-b8fa-19349c7a5738) old=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:05:50 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.817 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e6d43e21-e122-4885-b8fa-19349c7a5738 in datapath 55c091cc-a453-4c16-90a2-45d57ba3ca96 updated
Oct 01 14:05:50 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.819 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c091cc-a453-4c16-90a2-45d57ba3ca96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:05:50 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:50.820 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[29037719-cf39-47be-9a4e-343792148127]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:05:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:52 compute-0 ceph-mon[74802]: pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:53 compute-0 ceph-mon[74802]: pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:05:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:05:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:05:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:05:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:05:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2090342336' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:05:56 compute-0 ceph-mon[74802]: pgmap v1890: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:57 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.428 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8:0:1:f816:3eff:fe42:44f3 2001:db8::f816:3eff:fe42:44f3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe42:44f3/64 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ce2640b-c69b-48a5-ac25-0e680aa474d5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e6d43e21-e122-4885-b8fa-19349c7a5738) old=Port_Binding(mac=['fa:16:3e:42:44:f3 10.100.0.2 2001:db8::f816:3eff:fe42:44f3'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:44f3/64', 'neutron:device_id': 'ovnmeta-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c091cc-a453-4c16-90a2-45d57ba3ca96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:05:57 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.430 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e6d43e21-e122-4885-b8fa-19349c7a5738 in datapath 55c091cc-a453-4c16-90a2-45d57ba3ca96 updated
Oct 01 14:05:57 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.432 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c091cc-a453-4c16-90a2-45d57ba3ca96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:05:57 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:05:57.433 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[fd419c41-38ac-4a34-aab3-863bd119c8e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:05:57 compute-0 ceph-mon[74802]: pgmap v1891: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:05:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:05:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:05:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:00 compute-0 ceph-mon[74802]: pgmap v1892: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:00 compute-0 podman[299331]: 2025-10-01 14:06:00.529126805 +0000 UTC m=+0.058558111 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 14:06:00 compute-0 podman[299325]: 2025-10-01 14:06:00.538578865 +0000 UTC m=+0.075178419 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:06:00 compute-0 podman[299324]: 2025-10-01 14:06:00.538953066 +0000 UTC m=+0.079633319 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20250923)
Oct 01 14:06:00 compute-0 podman[299323]: 2025-10-01 14:06:00.552636922 +0000 UTC m=+0.100991668 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:06:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:02 compute-0 ceph-mon[74802]: pgmap v1893: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:04 compute-0 ceph-mon[74802]: pgmap v1894: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:06 compute-0 ceph-mon[74802]: pgmap v1895: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:08 compute-0 ceph-mon[74802]: pgmap v1896: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:10 compute-0 ceph-mon[74802]: pgmap v1897: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.332 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:06:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:06:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:06:12 compute-0 ceph-mon[74802]: pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:14 compute-0 ceph-mon[74802]: pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 01 14:06:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:15 compute-0 nova_compute[260022]: 2025-10-01 14:06:15.872 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:16 compute-0 ceph-mon[74802]: pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:18 compute-0 ceph-mon[74802]: pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:20 compute-0 ceph-mon[74802]: pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:22 compute-0 ceph-mon[74802]: pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.405 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.406 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.407 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:06:23 compute-0 ceph-mon[74802]: pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:06:23 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513407464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:06:23 compute-0 nova_compute[260022]: 2025-10-01 14:06:23.868 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.090 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.091 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5062MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.092 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.092 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.297 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.317 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.318 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.318 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.380 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.405 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.406 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.422 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.446 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.488 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:06:24 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/513407464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:06:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:06:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586880281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.917 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.924 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.965 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.966 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:06:24 compute-0 nova_compute[260022]: 2025-10-01 14:06:24.966 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:06:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:25 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1586880281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:06:25 compute-0 ceph-mon[74802]: pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:27 compute-0 nova_compute[260022]: 2025-10-01 14:06:27.966 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:28 compute-0 ceph-mon[74802]: pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:29 compute-0 nova_compute[260022]: 2025-10-01 14:06:29.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:30 compute-0 nova_compute[260022]: 2025-10-01 14:06:30.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:30 compute-0 ceph-mon[74802]: pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:31 compute-0 podman[299450]: 2025-10-01 14:06:31.551459463 +0000 UTC m=+0.085812137 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true)
Oct 01 14:06:31 compute-0 podman[299451]: 2025-10-01 14:06:31.565288512 +0000 UTC m=+0.094525222 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:06:31 compute-0 podman[299448]: 2025-10-01 14:06:31.565592122 +0000 UTC m=+0.109554500 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 01 14:06:31 compute-0 podman[299449]: 2025-10-01 14:06:31.576722355 +0000 UTC m=+0.116662846 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd)
Oct 01 14:06:32 compute-0 ceph-mon[74802]: pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:33 compute-0 nova_compute[260022]: 2025-10-01 14:06:33.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:33 compute-0 nova_compute[260022]: 2025-10-01 14:06:33.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:06:33 compute-0 nova_compute[260022]: 2025-10-01 14:06:33.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:06:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:33 compute-0 nova_compute[260022]: 2025-10-01 14:06:33.378 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:06:33 compute-0 nova_compute[260022]: 2025-10-01 14:06:33.379 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:34 compute-0 nova_compute[260022]: 2025-10-01 14:06:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:34 compute-0 ceph-mon[74802]: pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.473 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:b7:39 10.100.0.2 2001:db8::f816:3eff:feba:b739'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feba:b739/64', 'neutron:device_id': 'ovnmeta-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=411903f3-2feb-4b6b-97c8-847900bcae09, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f15a5db8-1914-4c13-b5ae-3d12d5ed5f17) old=Port_Binding(mac=['fa:16:3e:ba:b7:39 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b3c8992-1807-49a1-9a57-c5829337f33a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7cb470b3bf042fe90ce061f7c990de4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:06:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.475 161890 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f15a5db8-1914-4c13-b5ae-3d12d5ed5f17 in datapath 6b3c8992-1807-49a1-9a57-c5829337f33a updated
Oct 01 14:06:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.477 161890 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b3c8992-1807-49a1-9a57-c5829337f33a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 01 14:06:35 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:35.478 291014 DEBUG oslo.privsep.daemon [-] privsep: reply[a91c30cc-e38f-49b6-8ceb-fc1079b3ab3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 01 14:06:36 compute-0 ceph-mon[74802]: pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:36 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:36.757 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:06:36 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:36.758 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:06:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:38 compute-0 ceph-mon[74802]: pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:40 compute-0 ceph-mon[74802]: pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:41 compute-0 sudo[299523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:41 compute-0 sudo[299523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:41 compute-0 sudo[299523]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:41 compute-0 sudo[299548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:06:41 compute-0 sudo[299548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:41 compute-0 sudo[299548]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:41 compute-0 sudo[299573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:41 compute-0 sudo[299573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:41 compute-0 sudo[299573]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:41 compute-0 sudo[299598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:06:41 compute-0 sudo[299598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:41 compute-0 ceph-mon[74802]: pgmap v1913: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:41 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:06:41.761 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:06:42 compute-0 sudo[299598]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5447307d-0487-4179-a998-aa3fca53fcba does not exist
Oct 01 14:06:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev eed51320-96ad-4143-adf8-5309d262ca95 does not exist
Oct 01 14:06:42 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev fa8d65a1-403a-4135-bb22-db8103926537 does not exist
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:06:42 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:06:42 compute-0 sudo[299654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:42 compute-0 sudo[299654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:42 compute-0 sudo[299654]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:42 compute-0 sudo[299679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:06:42 compute-0 sudo[299679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:42 compute-0 sudo[299679]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:42 compute-0 sudo[299704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:42 compute-0 sudo[299704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:42 compute-0 sudo[299704]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:42 compute-0 sudo[299729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:06:42 compute-0 sudo[299729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:06:42 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.784226475 +0000 UTC m=+0.045387431 container create c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 14:06:42 compute-0 systemd[1]: Started libpod-conmon-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope.
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.767193525 +0000 UTC m=+0.028354501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:42 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.885332936 +0000 UTC m=+0.146493892 container init c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:06:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.89867211 +0000 UTC m=+0.159833066 container start c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.901603193 +0000 UTC m=+0.162764149 container attach c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 01 14:06:42 compute-0 nice_beaver[299810]: 167 167
Oct 01 14:06:42 compute-0 systemd[1]: libpod-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope: Deactivated successfully.
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.909548735 +0000 UTC m=+0.170709691 container died c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9917bc67cdfb5b026732375781709f6f6328843cb91de993270ac5d4eebaee68-merged.mount: Deactivated successfully.
Oct 01 14:06:42 compute-0 podman[299794]: 2025-10-01 14:06:42.95317442 +0000 UTC m=+0.214335376 container remove c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:06:42 compute-0 systemd[1]: libpod-conmon-c8fb9f8c007ce9d3756cc78420149070802599cf30386deac9ebc5b168997961.scope: Deactivated successfully.
Oct 01 14:06:43 compute-0 podman[299836]: 2025-10-01 14:06:43.146418696 +0000 UTC m=+0.054793820 container create 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:06:43 compute-0 systemd[1]: Started libpod-conmon-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope.
Oct 01 14:06:43 compute-0 podman[299836]: 2025-10-01 14:06:43.118230551 +0000 UTC m=+0.026605725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:43 compute-0 podman[299836]: 2025-10-01 14:06:43.248305061 +0000 UTC m=+0.156680225 container init 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:06:43 compute-0 podman[299836]: 2025-10-01 14:06:43.265859118 +0000 UTC m=+0.174234242 container start 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:06:43 compute-0 podman[299836]: 2025-10-01 14:06:43.271483667 +0000 UTC m=+0.179858831 container attach 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 14:06:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:43 compute-0 ceph-mon[74802]: pgmap v1914: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:44 compute-0 brave_knuth[299852]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:06:44 compute-0 brave_knuth[299852]: --> relative data size: 1.0
Oct 01 14:06:44 compute-0 brave_knuth[299852]: --> All data devices are unavailable
Oct 01 14:06:44 compute-0 systemd[1]: libpod-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Deactivated successfully.
Oct 01 14:06:44 compute-0 podman[299836]: 2025-10-01 14:06:44.421584345 +0000 UTC m=+1.329959489 container died 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:06:44 compute-0 systemd[1]: libpod-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Consumed 1.103s CPU time.
Oct 01 14:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cbb559851fb8a860746dfb18eb8f4807706d856e6862df0c37712a574806d15-merged.mount: Deactivated successfully.
Oct 01 14:06:44 compute-0 podman[299836]: 2025-10-01 14:06:44.498094834 +0000 UTC m=+1.406470118 container remove 0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_knuth, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:06:44 compute-0 systemd[1]: libpod-conmon-0e5e9b6b87f394b5237947463f7ee320b008e6cb303dace1aca0abb0e8e69eb4.scope: Deactivated successfully.
Oct 01 14:06:44 compute-0 sudo[299729]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:44 compute-0 sudo[299894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:44 compute-0 sudo[299894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:44 compute-0 sudo[299894]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:44 compute-0 sudo[299919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:06:44 compute-0 sudo[299919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:44 compute-0 sudo[299919]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:44 compute-0 sudo[299944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:44 compute-0 sudo[299944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:44 compute-0 sudo[299944]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:44 compute-0 sudo[299969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:06:44 compute-0 sudo[299969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.24090685 +0000 UTC m=+0.050643689 container create ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:06:45 compute-0 systemd[1]: Started libpod-conmon-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope.
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.214812401 +0000 UTC m=+0.024549350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.338063655 +0000 UTC m=+0.147800594 container init ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.343718644 +0000 UTC m=+0.153455473 container start ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.346913776 +0000 UTC m=+0.156650725 container attach ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:06:45 compute-0 condescending_lumiere[300051]: 167 167
Oct 01 14:06:45 compute-0 systemd[1]: libpod-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope: Deactivated successfully.
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.349194578 +0000 UTC m=+0.158931427 container died ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-50ead27a6ce8af2575fff3fafef78fed186e268eed352c62384ef28f74474c94-merged.mount: Deactivated successfully.
Oct 01 14:06:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:45 compute-0 podman[300034]: 2025-10-01 14:06:45.39552939 +0000 UTC m=+0.205266229 container remove ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 14:06:45 compute-0 systemd[1]: libpod-conmon-ad162e9a7f02da8a606c5028644516fac7e5c352cc34f0578fb5f99dd9e5a7ba.scope: Deactivated successfully.
Oct 01 14:06:45 compute-0 podman[300074]: 2025-10-01 14:06:45.580859005 +0000 UTC m=+0.039469734 container create 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 14:06:45 compute-0 systemd[1]: Started libpod-conmon-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope.
Oct 01 14:06:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:45 compute-0 podman[300074]: 2025-10-01 14:06:45.565551239 +0000 UTC m=+0.024162008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:45 compute-0 podman[300074]: 2025-10-01 14:06:45.666917867 +0000 UTC m=+0.125528616 container init 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:06:45 compute-0 podman[300074]: 2025-10-01 14:06:45.675758338 +0000 UTC m=+0.134369077 container start 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 14:06:45 compute-0 podman[300074]: 2025-10-01 14:06:45.679824787 +0000 UTC m=+0.138435526 container attach 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]: {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     "0": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "devices": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "/dev/loop3"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             ],
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_name": "ceph_lv0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_size": "21470642176",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "name": "ceph_lv0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "tags": {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_name": "ceph",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.crush_device_class": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.encrypted": "0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_id": "0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.vdo": "0"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             },
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "vg_name": "ceph_vg0"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         }
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     ],
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     "1": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "devices": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "/dev/loop4"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             ],
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_name": "ceph_lv1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_size": "21470642176",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "name": "ceph_lv1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "tags": {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_name": "ceph",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.crush_device_class": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.encrypted": "0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_id": "1",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.vdo": "0"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             },
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "vg_name": "ceph_vg1"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         }
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     ],
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     "2": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "devices": [
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "/dev/loop5"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             ],
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_name": "ceph_lv2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_size": "21470642176",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "name": "ceph_lv2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "tags": {
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.cluster_name": "ceph",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.crush_device_class": "",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.encrypted": "0",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osd_id": "2",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:                 "ceph.vdo": "0"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             },
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "type": "block",
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:             "vg_name": "ceph_vg2"
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:         }
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]:     ]
Oct 01 14:06:46 compute-0 hopeful_kalam[300091]: }
Oct 01 14:06:46 compute-0 ceph-mon[74802]: pgmap v1915: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:46 compute-0 systemd[1]: libpod-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope: Deactivated successfully.
Oct 01 14:06:46 compute-0 podman[300100]: 2025-10-01 14:06:46.514079786 +0000 UTC m=+0.043474751 container died 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 14:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-96da54ca1fdb8aeb7ef5659613640499a50b3f117626d2af128e079c8c558b6b-merged.mount: Deactivated successfully.
Oct 01 14:06:46 compute-0 podman[300100]: 2025-10-01 14:06:46.579868285 +0000 UTC m=+0.109263190 container remove 4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:06:46 compute-0 systemd[1]: libpod-conmon-4470748394aabf93609366ebd4519d151703db26625dcc3c6878e13d23ffff97.scope: Deactivated successfully.
Oct 01 14:06:46 compute-0 sudo[299969]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:46 compute-0 sudo[300116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:46 compute-0 sudo[300116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:46 compute-0 sudo[300116]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:46 compute-0 sudo[300141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:06:46 compute-0 sudo[300141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:46 compute-0 sudo[300141]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:46 compute-0 sudo[300166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:46 compute-0 sudo[300166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:46 compute-0 sudo[300166]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:47 compute-0 sudo[300191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:06:47 compute-0 sudo[300191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.412208283 +0000 UTC m=+0.041960494 container create 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 14:06:47 compute-0 systemd[1]: Started libpod-conmon-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope.
Oct 01 14:06:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.39225374 +0000 UTC m=+0.022005981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.504302697 +0000 UTC m=+0.134054918 container init 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.515179993 +0000 UTC m=+0.144932194 container start 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.519612963 +0000 UTC m=+0.149365204 container attach 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:06:47 compute-0 gracious_chaplygin[300275]: 167 167
Oct 01 14:06:47 compute-0 systemd[1]: libpod-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope: Deactivated successfully.
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.523555919 +0000 UTC m=+0.153308160 container died 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 14:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e77f7ca88593617ce56341d305eee3a0e01772e6567857a70ccf7973c7c0deb-merged.mount: Deactivated successfully.
Oct 01 14:06:47 compute-0 podman[300258]: 2025-10-01 14:06:47.571602894 +0000 UTC m=+0.201355135 container remove 969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:06:47 compute-0 systemd[1]: libpod-conmon-969da347c134f753d98ba09ae0946031866d281120b2b847eb7ed09ba968b040.scope: Deactivated successfully.
Oct 01 14:06:47 compute-0 podman[300299]: 2025-10-01 14:06:47.822224452 +0000 UTC m=+0.062325160 container create 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:06:47 compute-0 systemd[1]: Started libpod-conmon-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope.
Oct 01 14:06:47 compute-0 podman[300299]: 2025-10-01 14:06:47.79666693 +0000 UTC m=+0.036767688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:06:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:47 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:06:47
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Oct 01 14:06:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:06:47 compute-0 podman[300299]: 2025-10-01 14:06:47.921830274 +0000 UTC m=+0.161930962 container init 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:06:47 compute-0 podman[300299]: 2025-10-01 14:06:47.92737757 +0000 UTC m=+0.167478248 container start 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:06:47 compute-0 podman[300299]: 2025-10-01 14:06:47.930617813 +0000 UTC m=+0.170718511 container attach 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:06:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:06:48 compute-0 nova_compute[260022]: 2025-10-01 14:06:48.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:06:48 compute-0 ceph-mon[74802]: pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]: {
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_id": 0,
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "type": "bluestore"
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     },
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_id": 2,
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "type": "bluestore"
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     },
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_id": 1,
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:         "type": "bluestore"
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]:     }
Oct 01 14:06:48 compute-0 infallible_jepsen[300316]: }
Oct 01 14:06:48 compute-0 systemd[1]: libpod-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope: Deactivated successfully.
Oct 01 14:06:48 compute-0 podman[300299]: 2025-10-01 14:06:48.946232061 +0000 UTC m=+1.186332799 container died 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 14:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7df67c32196da2f7d0c608b183061c56e5b42b5dd8082484f4d61839147e84e-merged.mount: Deactivated successfully.
Oct 01 14:06:49 compute-0 podman[300299]: 2025-10-01 14:06:49.007376173 +0000 UTC m=+1.247476841 container remove 65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 01 14:06:49 compute-0 systemd[1]: libpod-conmon-65f0cb3b6944790a7944e1685586ed04368367f3db89abecfef51db2c5d4e532.scope: Deactivated successfully.
Oct 01 14:06:49 compute-0 sudo[300191]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:06:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:06:49 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:49 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e1857238-f6ad-435e-93af-fd7e6175920e does not exist
Oct 01 14:06:49 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ebd20715-e0d2-4214-9826-ba46278ce540 does not exist
Oct 01 14:06:49 compute-0 sudo[300361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:06:49 compute-0 sudo[300361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:49 compute-0 sudo[300361]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:49 compute-0 sudo[300386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:06:49 compute-0 sudo[300386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:06:49 compute-0 sudo[300386]: pam_unix(sudo:session): session closed for user root
Oct 01 14:06:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:06:50 compute-0 ceph-mon[74802]: pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:52 compute-0 ceph-mon[74802]: pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:54 compute-0 ceph-mon[74802]: pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:06:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:06:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:06:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:06:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:06:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/753787332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:06:56 compute-0 ceph-mon[74802]: pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:06:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:06:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:06:58 compute-0 ceph-mon[74802]: pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:06:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:00 compute-0 ceph-mon[74802]: pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:02 compute-0 ceph-mon[74802]: pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:02 compute-0 podman[300414]: 2025-10-01 14:07:02.513027134 +0000 UTC m=+0.060511812 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:07:02 compute-0 podman[300413]: 2025-10-01 14:07:02.519165609 +0000 UTC m=+0.068651200 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct 01 14:07:02 compute-0 podman[300412]: 2025-10-01 14:07:02.523463046 +0000 UTC m=+0.071947636 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:07:02 compute-0 podman[300411]: 2025-10-01 14:07:02.55035929 +0000 UTC m=+0.098730266 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:07:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:04 compute-0 ceph-mon[74802]: pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:06 compute-0 ceph-mon[74802]: pgmap v1925: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.902431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627902475, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1525, "num_deletes": 255, "total_data_size": 2456067, "memory_usage": 2500184, "flush_reason": "Manual Compaction"}
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627921297, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2411125, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37172, "largest_seqno": 38696, "table_properties": {"data_size": 2403968, "index_size": 4228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14334, "raw_average_key_size": 19, "raw_value_size": 2389745, "raw_average_value_size": 3260, "num_data_blocks": 189, "num_entries": 733, "num_filter_entries": 733, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327465, "oldest_key_time": 1759327465, "file_creation_time": 1759327627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 18920 microseconds, and 6859 cpu microseconds.
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.921352) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2411125 bytes OK
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.921378) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922893) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922915) EVENT_LOG_v1 {"time_micros": 1759327627922908, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.922935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2449409, prev total WAL file size 2449409, number of live WAL files 2.
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.924094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323631' seq:72057594037927935, type:22 .. '6C6F676D0031353132' seq:0, type:0; will stop at (end)
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2354KB)], [83(8923KB)]
Oct 01 14:07:07 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327627924146, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11549148, "oldest_snapshot_seqno": -1}
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6020 keys, 11446675 bytes, temperature: kUnknown
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628014469, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11446675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402930, "index_size": 27571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 151817, "raw_average_key_size": 25, "raw_value_size": 11290685, "raw_average_value_size": 1875, "num_data_blocks": 1132, "num_entries": 6020, "num_filter_entries": 6020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.014841) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11446675 bytes
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.016571) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.7 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 6542, records dropped: 522 output_compression: NoCompression
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.016597) EVENT_LOG_v1 {"time_micros": 1759327628016585, "job": 48, "event": "compaction_finished", "compaction_time_micros": 90433, "compaction_time_cpu_micros": 44325, "output_level": 6, "num_output_files": 1, "total_output_size": 11446675, "num_input_records": 6542, "num_output_records": 6020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628017483, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327628020204, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:07.923982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:08.020333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:08 compute-0 ceph-mon[74802]: pgmap v1926: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:09 compute-0 ceph-mon[74802]: pgmap v1927: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.333 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:07:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:07:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:07:12 compute-0 ceph-mon[74802]: pgmap v1928: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:14 compute-0 ceph-mon[74802]: pgmap v1929: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:16 compute-0 ceph-mon[74802]: pgmap v1930: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:17 compute-0 nova_compute[260022]: 2025-10-01 14:07:17.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:18 compute-0 ceph-mon[74802]: pgmap v1931: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:20 compute-0 ceph-mon[74802]: pgmap v1932: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:22 compute-0 ceph-mon[74802]: pgmap v1933: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.382 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.383 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:07:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:23 compute-0 ceph-mon[74802]: pgmap v1934: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:07:23 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420103665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:07:23 compute-0 nova_compute[260022]: 2025-10-01 14:07:23.810 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.029 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5019MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.030 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.107 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.121 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.122 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.122 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.170 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:07:24 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/420103665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:07:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:07:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990516151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.648 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.656 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.671 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.673 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:07:24 compute-0 nova_compute[260022]: 2025-10-01 14:07:24.674 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:07:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:25 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3990516151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:07:25 compute-0 ceph-mon[74802]: pgmap v1935: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:26 compute-0 nova_compute[260022]: 2025-10-01 14:07:26.674 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:26 compute-0 nova_compute[260022]: 2025-10-01 14:07:26.675 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:26 compute-0 nova_compute[260022]: 2025-10-01 14:07:26.675 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:07:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:28 compute-0 ceph-mon[74802]: pgmap v1936: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:30 compute-0 nova_compute[260022]: 2025-10-01 14:07:30.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:30 compute-0 ceph-mon[74802]: pgmap v1937: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:31 compute-0 nova_compute[260022]: 2025-10-01 14:07:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:32 compute-0 ceph-mon[74802]: pgmap v1938: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:33 compute-0 podman[300539]: 2025-10-01 14:07:33.541866777 +0000 UTC m=+0.090784554 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 14:07:33 compute-0 podman[300540]: 2025-10-01 14:07:33.564679262 +0000 UTC m=+0.107710662 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 01 14:07:33 compute-0 podman[300538]: 2025-10-01 14:07:33.570177436 +0000 UTC m=+0.122275713 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:07:33 compute-0 podman[300541]: 2025-10-01 14:07:33.570345862 +0000 UTC m=+0.104708146 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 01 14:07:34 compute-0 nova_compute[260022]: 2025-10-01 14:07:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:34 compute-0 nova_compute[260022]: 2025-10-01 14:07:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:34 compute-0 ceph-mon[74802]: pgmap v1939: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:35 compute-0 nova_compute[260022]: 2025-10-01 14:07:35.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:07:35 compute-0 nova_compute[260022]: 2025-10-01 14:07:35.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:07:35 compute-0 nova_compute[260022]: 2025-10-01 14:07:35.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:07:35 compute-0 nova_compute[260022]: 2025-10-01 14:07:35.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:07:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:36 compute-0 ceph-mon[74802]: pgmap v1940: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:37 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:37.753 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:07:37 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:37.755 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:07:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:38 compute-0 ceph-mon[74802]: pgmap v1941: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:38 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:07:38.757 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:07:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:40 compute-0 ceph-mon[74802]: pgmap v1942: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:42 compute-0 ceph-mon[74802]: pgmap v1943: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:43 compute-0 ceph-mon[74802]: pgmap v1944: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:46 compute-0 ceph-mon[74802]: pgmap v1945: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:07:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:07:47
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data']
Oct 01 14:07:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:07:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:07:48 compute-0 ceph-mon[74802]: pgmap v1946: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:49 compute-0 sudo[300617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:49 compute-0 sudo[300617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:49 compute-0 sudo[300617]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:49 compute-0 sudo[300642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:07:49 compute-0 sudo[300642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:49 compute-0 sudo[300642]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:49 compute-0 sudo[300667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:49 compute-0 sudo[300667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:49 compute-0 sudo[300667]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:49 compute-0 sudo[300692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:07:49 compute-0 sudo[300692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:50 compute-0 sudo[300692]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c1db3218-c9a5-4fb5-b550-fe746434fd69 does not exist
Oct 01 14:07:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5346077c-4de8-482b-a31b-a4e32446bb5e does not exist
Oct 01 14:07:50 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d4a98774-84d3-4189-ac84-56a421040d62 does not exist
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:07:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:07:50 compute-0 sudo[300748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:50 compute-0 sudo[300748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:50 compute-0 sudo[300748]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:50 compute-0 sudo[300773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:07:50 compute-0 sudo[300773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:50 compute-0 sudo[300773]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:50 compute-0 sudo[300798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:50 compute-0 sudo[300798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:50 compute-0 sudo[300798]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:50 compute-0 ceph-mon[74802]: pgmap v1947: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:07:50 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:07:50 compute-0 sudo[300823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:07:50 compute-0 sudo[300823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.029972981 +0000 UTC m=+0.039577728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.151505209 +0000 UTC m=+0.161109956 container create 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:07:51 compute-0 systemd[1]: Started libpod-conmon-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope.
Oct 01 14:07:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.370413529 +0000 UTC m=+0.380018266 container init 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.383204785 +0000 UTC m=+0.392809492 container start 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 14:07:51 compute-0 systemd[1]: libpod-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope: Deactivated successfully.
Oct 01 14:07:51 compute-0 peaceful_heisenberg[300905]: 167 167
Oct 01 14:07:51 compute-0 conmon[300905]: conmon 6abb8d761bc8664a1492 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope/container/memory.events
Oct 01 14:07:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.452259256 +0000 UTC m=+0.461863983 container attach 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 01 14:07:51 compute-0 podman[300888]: 2025-10-01 14:07:51.454109976 +0000 UTC m=+0.463714723 container died 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:07:51 compute-0 ceph-mon[74802]: pgmap v1948: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc401df216f4f41e2b4fcd4aec341e32085fa744f30475eae1fb137dbcf7557-merged.mount: Deactivated successfully.
Oct 01 14:07:52 compute-0 podman[300888]: 2025-10-01 14:07:52.013309738 +0000 UTC m=+1.022914455 container remove 6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_heisenberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 14:07:52 compute-0 systemd[1]: libpod-conmon-6abb8d761bc8664a14920733edfd78f1c61fd7062267d3edae23a444edbd08ff.scope: Deactivated successfully.
Oct 01 14:07:52 compute-0 podman[300929]: 2025-10-01 14:07:52.2584609 +0000 UTC m=+0.059183259 container create 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct 01 14:07:52 compute-0 systemd[1]: Started libpod-conmon-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope.
Oct 01 14:07:52 compute-0 podman[300929]: 2025-10-01 14:07:52.230189484 +0000 UTC m=+0.030911893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:52 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:52 compute-0 podman[300929]: 2025-10-01 14:07:52.368015898 +0000 UTC m=+0.168738227 container init 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct 01 14:07:52 compute-0 podman[300929]: 2025-10-01 14:07:52.38694567 +0000 UTC m=+0.187668029 container start 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:07:52 compute-0 podman[300929]: 2025-10-01 14:07:52.426116933 +0000 UTC m=+0.226839272 container attach 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:07:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.923011) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672923038, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 591, "num_deletes": 250, "total_data_size": 634701, "memory_usage": 644736, "flush_reason": "Manual Compaction"}
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672934464, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 418118, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38697, "largest_seqno": 39287, "table_properties": {"data_size": 415324, "index_size": 766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7550, "raw_average_key_size": 20, "raw_value_size": 409540, "raw_average_value_size": 1109, "num_data_blocks": 35, "num_entries": 369, "num_filter_entries": 369, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327628, "oldest_key_time": 1759327628, "file_creation_time": 1759327672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 11505 microseconds, and 2062 cpu microseconds.
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.934511) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 418118 bytes OK
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.934532) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937196) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937213) EVENT_LOG_v1 {"time_micros": 1759327672937207, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 631466, prev total WAL file size 631466, number of live WAL files 2.
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937983) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(408KB)], [86(10MB)]
Oct 01 14:07:52 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327672938099, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11864793, "oldest_snapshot_seqno": -1}
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5896 keys, 8787099 bytes, temperature: kUnknown
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673002310, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 8787099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8748509, "index_size": 22736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149426, "raw_average_key_size": 25, "raw_value_size": 8642713, "raw_average_value_size": 1465, "num_data_blocks": 932, "num_entries": 5896, "num_filter_entries": 5896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.002574) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 8787099 bytes
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.005367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.5 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(49.4) write-amplify(21.0) OK, records in: 6389, records dropped: 493 output_compression: NoCompression
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.005429) EVENT_LOG_v1 {"time_micros": 1759327673005390, "job": 50, "event": "compaction_finished", "compaction_time_micros": 64295, "compaction_time_cpu_micros": 22058, "output_level": 6, "num_output_files": 1, "total_output_size": 8787099, "num_input_records": 6389, "num_output_records": 5896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673005839, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327673007566, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:52.937922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:07:53.007838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:07:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:53 compute-0 silly_chatterjee[300946]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:07:53 compute-0 silly_chatterjee[300946]: --> relative data size: 1.0
Oct 01 14:07:53 compute-0 silly_chatterjee[300946]: --> All data devices are unavailable
Oct 01 14:07:53 compute-0 systemd[1]: libpod-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Deactivated successfully.
Oct 01 14:07:53 compute-0 systemd[1]: libpod-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Consumed 1.074s CPU time.
Oct 01 14:07:53 compute-0 podman[300929]: 2025-10-01 14:07:53.510696404 +0000 UTC m=+1.311418763 container died 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 01 14:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc15645890cf136a5182f0fe6cacf8c8362930b2ef394d9deb83526fd2b9e0c-merged.mount: Deactivated successfully.
Oct 01 14:07:53 compute-0 podman[300929]: 2025-10-01 14:07:53.765578246 +0000 UTC m=+1.566300605 container remove 77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:07:53 compute-0 systemd[1]: libpod-conmon-77754da693a83d288f83b9a3e0ef8f7be0168cdfa0d89571e75dd6fae241ee77.scope: Deactivated successfully.
Oct 01 14:07:53 compute-0 sudo[300823]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:53 compute-0 sudo[300987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:53 compute-0 sudo[300987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:53 compute-0 sudo[300987]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:53 compute-0 ceph-mon[74802]: pgmap v1949: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:54 compute-0 sudo[301012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:07:54 compute-0 sudo[301012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:54 compute-0 sudo[301012]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:54 compute-0 sudo[301037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:54 compute-0 sudo[301037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:54 compute-0 sudo[301037]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:54 compute-0 sudo[301062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:07:54 compute-0 sudo[301062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.546172676 +0000 UTC m=+0.034028572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.71483281 +0000 UTC m=+0.202688686 container create e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:07:54 compute-0 systemd[1]: Started libpod-conmon-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope.
Oct 01 14:07:54 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.900909538 +0000 UTC m=+0.388765484 container init e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.912492435 +0000 UTC m=+0.400348311 container start e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 14:07:54 compute-0 magical_mayer[301144]: 167 167
Oct 01 14:07:54 compute-0 systemd[1]: libpod-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope: Deactivated successfully.
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.943148609 +0000 UTC m=+0.431004495 container attach e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 14:07:54 compute-0 podman[301128]: 2025-10-01 14:07:54.943974094 +0000 UTC m=+0.431829980 container died e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 14:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b80d57f83e9a604e03ba176006739446e6fee0a0bb4ec2c6002521f92cd35107-merged.mount: Deactivated successfully.
Oct 01 14:07:55 compute-0 podman[301128]: 2025-10-01 14:07:55.17095368 +0000 UTC m=+0.658809526 container remove e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:07:55 compute-0 systemd[1]: libpod-conmon-e48c288c1e77a7210583b4924c47735d8d704cbc0b1dda1f680fb0738c18544d.scope: Deactivated successfully.
Oct 01 14:07:55 compute-0 podman[301170]: 2025-10-01 14:07:55.346514773 +0000 UTC m=+0.037369106 container create 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:07:55 compute-0 systemd[1]: Started libpod-conmon-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope.
Oct 01 14:07:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:55 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:55 compute-0 podman[301170]: 2025-10-01 14:07:55.328108989 +0000 UTC m=+0.018963352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:55 compute-0 podman[301170]: 2025-10-01 14:07:55.45757596 +0000 UTC m=+0.148430303 container init 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:07:55 compute-0 podman[301170]: 2025-10-01 14:07:55.46859801 +0000 UTC m=+0.159452373 container start 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:07:55 compute-0 podman[301170]: 2025-10-01 14:07:55.475990034 +0000 UTC m=+0.166844387 container attach 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]: {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     "0": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "devices": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "/dev/loop3"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             ],
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_name": "ceph_lv0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_size": "21470642176",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "name": "ceph_lv0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "tags": {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_name": "ceph",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.crush_device_class": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.encrypted": "0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_id": "0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.vdo": "0"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             },
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "vg_name": "ceph_vg0"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         }
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     ],
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     "1": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "devices": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "/dev/loop4"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             ],
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_name": "ceph_lv1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_size": "21470642176",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "name": "ceph_lv1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "tags": {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_name": "ceph",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.crush_device_class": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.encrypted": "0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_id": "1",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.vdo": "0"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             },
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "vg_name": "ceph_vg1"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         }
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     ],
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     "2": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "devices": [
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "/dev/loop5"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             ],
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_name": "ceph_lv2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_size": "21470642176",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "name": "ceph_lv2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "tags": {
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.cluster_name": "ceph",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.crush_device_class": "",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.encrypted": "0",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osd_id": "2",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:                 "ceph.vdo": "0"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             },
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "type": "block",
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:             "vg_name": "ceph_vg2"
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:         }
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]:     ]
Oct 01 14:07:56 compute-0 confident_dijkstra[301187]: }
Oct 01 14:07:56 compute-0 systemd[1]: libpod-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope: Deactivated successfully.
Oct 01 14:07:56 compute-0 conmon[301187]: conmon 564731c42090395f4d72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope/container/memory.events
Oct 01 14:07:56 compute-0 podman[301170]: 2025-10-01 14:07:56.276214187 +0000 UTC m=+0.967068550 container died 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d445d6088029acdf79cf27bce05e34f88475366d403d69c68e3d1514bd15f61e-merged.mount: Deactivated successfully.
Oct 01 14:07:56 compute-0 podman[301170]: 2025-10-01 14:07:56.344227047 +0000 UTC m=+1.035081380 container remove 564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:07:56 compute-0 systemd[1]: libpod-conmon-564731c42090395f4d72ae0a12b437780c6888967108a54bb45167376fe9f947.scope: Deactivated successfully.
Oct 01 14:07:56 compute-0 sudo[301062]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:56 compute-0 ceph-mon[74802]: pgmap v1950: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:56 compute-0 sudo[301209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:56 compute-0 sudo[301209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:56 compute-0 sudo[301209]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:56 compute-0 sudo[301234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:07:56 compute-0 sudo[301234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:56 compute-0 sudo[301234]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:56 compute-0 sudo[301259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:56 compute-0 sudo[301259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:56 compute-0 sudo[301259]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:56 compute-0 sudo[301284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:07:56 compute-0 sudo[301284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.088503194 +0000 UTC m=+0.031798039 container create f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:07:57 compute-0 systemd[1]: Started libpod-conmon-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope.
Oct 01 14:07:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.15763699 +0000 UTC m=+0.100931905 container init f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.164493167 +0000 UTC m=+0.107788012 container start f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.168019569 +0000 UTC m=+0.111314414 container attach f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 14:07:57 compute-0 stupefied_gould[301363]: 167 167
Oct 01 14:07:57 compute-0 systemd[1]: libpod-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope: Deactivated successfully.
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.075242034 +0000 UTC m=+0.018536899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:57 compute-0 conmon[301363]: conmon f2a8d08e5e8c4065890b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope/container/memory.events
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.172683827 +0000 UTC m=+0.115978672 container died f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-83b9d00628243f06cbcd1a3dfa557858e2e02e9549d34f8b55f794803bb70106-merged.mount: Deactivated successfully.
Oct 01 14:07:57 compute-0 podman[301347]: 2025-10-01 14:07:57.21184216 +0000 UTC m=+0.155137005 container remove f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 14:07:57 compute-0 systemd[1]: libpod-conmon-f2a8d08e5e8c4065890b47bbc9ba09d4a02ef52d104f7359fc51bd56b862780e.scope: Deactivated successfully.
Oct 01 14:07:57 compute-0 podman[301386]: 2025-10-01 14:07:57.380849826 +0000 UTC m=+0.050586457 container create a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:57 compute-0 systemd[1]: Started libpod-conmon-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope.
Oct 01 14:07:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:07:57 compute-0 podman[301386]: 2025-10-01 14:07:57.357338159 +0000 UTC m=+0.027074870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:07:57 compute-0 podman[301386]: 2025-10-01 14:07:57.475921744 +0000 UTC m=+0.145658425 container init a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:07:57 compute-0 podman[301386]: 2025-10-01 14:07:57.487280894 +0000 UTC m=+0.157017565 container start a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:07:57 compute-0 podman[301386]: 2025-10-01 14:07:57.490615331 +0000 UTC m=+0.160352012 container attach a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:07:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:07:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:07:58 compute-0 ceph-mon[74802]: pgmap v1951: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:58 compute-0 vigorous_bell[301402]: {
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_id": 0,
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "type": "bluestore"
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     },
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_id": 2,
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "type": "bluestore"
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     },
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_id": 1,
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:         "type": "bluestore"
Oct 01 14:07:58 compute-0 vigorous_bell[301402]:     }
Oct 01 14:07:58 compute-0 vigorous_bell[301402]: }
Oct 01 14:07:58 compute-0 systemd[1]: libpod-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Deactivated successfully.
Oct 01 14:07:58 compute-0 systemd[1]: libpod-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Consumed 1.066s CPU time.
Oct 01 14:07:58 compute-0 podman[301435]: 2025-10-01 14:07:58.591927452 +0000 UTC m=+0.033320179 container died a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74907412cb643d7d798fa39de07f9e3afc15cb3e933888c83d345ac389c0bfe-merged.mount: Deactivated successfully.
Oct 01 14:07:58 compute-0 podman[301435]: 2025-10-01 14:07:58.654152068 +0000 UTC m=+0.095544785 container remove a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:07:58 compute-0 systemd[1]: libpod-conmon-a05d9d7deb4a69a617d261fd65fd71c147a242e01e8434620573a9bbc33c9333.scope: Deactivated successfully.
Oct 01 14:07:58 compute-0 sudo[301284]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:07:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:07:58 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev be778427-6e6f-4ef1-8e98-b2fb8dafd97c does not exist
Oct 01 14:07:58 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 6a9ca557-7d73-4090-9880-54ce7ba70e5e does not exist
Oct 01 14:07:58 compute-0 sudo[301450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:07:58 compute-0 sudo[301450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:58 compute-0 sudo[301450]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:58 compute-0 sudo[301475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:07:58 compute-0 sudo[301475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:07:58 compute-0 sudo[301475]: pam_unix(sudo:session): session closed for user root
Oct 01 14:07:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:07:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:59 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:07:59 compute-0 ceph-mon[74802]: pgmap v1952: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:08:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 8675 writes, 39K keys, 8675 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8675 writes, 8675 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1316 writes, 6220 keys, 1316 commit groups, 1.0 writes per commit group, ingest: 8.59 MB, 0.01 MB/s
                                           Interval WAL: 1316 writes, 1316 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.5      2.60              0.19        25    0.104       0      0       0.0       0.0
                                             L6      1/0    8.38 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     55.4     45.8      4.11              0.72        24    0.171    125K    13K       0.0       0.0
                                            Sum      1/0    8.38 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     34.0     35.2      6.70              0.92        49    0.137    125K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9     69.6     69.1      0.92              0.26        12    0.076     37K   3074       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     55.4     45.8      4.11              0.72        24    0.171    125K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.5      2.58              0.19        24    0.108       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.047, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.7 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 25.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000153 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1644,24.58 MB,8.087%) FilterBlock(50,355.05 KB,0.114054%) IndexBlock(50,627.59 KB,0.201607%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 14:08:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:02 compute-0 ceph-mon[74802]: pgmap v1953: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.924100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682924131, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 251, "total_data_size": 198781, "memory_usage": 205576, "flush_reason": "Manual Compaction"}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682927130, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 198538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39288, "largest_seqno": 39631, "table_properties": {"data_size": 196314, "index_size": 388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4411, "raw_average_key_size": 14, "raw_value_size": 192005, "raw_average_value_size": 650, "num_data_blocks": 16, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327673, "oldest_key_time": 1759327673, "file_creation_time": 1759327682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3087 microseconds, and 1476 cpu microseconds.
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.927184) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 198538 bytes OK
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.927205) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929012) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929035) EVENT_LOG_v1 {"time_micros": 1759327682929027, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 196434, prev total WAL file size 196434, number of live WAL files 2.
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(193KB)], [89(8581KB)]
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682929767, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 8985637, "oldest_snapshot_seqno": -1}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5678 keys, 8270879 bytes, temperature: kUnknown
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682986939, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8270879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8233842, "index_size": 21759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 146688, "raw_average_key_size": 25, "raw_value_size": 8131819, "raw_average_value_size": 1432, "num_data_blocks": 872, "num_entries": 5678, "num_filter_entries": 5678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.987172) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8270879 bytes
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.988351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.2 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(86.9) write-amplify(41.7) OK, records in: 6191, records dropped: 513 output_compression: NoCompression
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.988370) EVENT_LOG_v1 {"time_micros": 1759327682988361, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57173, "compaction_time_cpu_micros": 38048, "output_level": 6, "num_output_files": 1, "total_output_size": 8270879, "num_input_records": 6191, "num_output_records": 5678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682988523, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327682990271, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.929444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:02 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:02.990417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:03 compute-0 ceph-mon[74802]: pgmap v1954: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:04 compute-0 podman[301501]: 2025-10-01 14:08:04.538460552 +0000 UTC m=+0.080079063 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:08:04 compute-0 podman[301503]: 2025-10-01 14:08:04.538655228 +0000 UTC m=+0.069407464 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:08:04 compute-0 podman[301502]: 2025-10-01 14:08:04.541648903 +0000 UTC m=+0.075267310 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid)
Oct 01 14:08:04 compute-0 podman[301500]: 2025-10-01 14:08:04.574050161 +0000 UTC m=+0.115232278 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0)
Oct 01 14:08:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:06 compute-0 ceph-mon[74802]: pgmap v1955: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:08 compute-0 ceph-mon[74802]: pgmap v1956: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:10 compute-0 ceph-mon[74802]: pgmap v1957: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.334 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:08:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:08:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:08:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:08:12 compute-0 ceph-mon[74802]: pgmap v1958: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:13 compute-0 ceph-mon[74802]: pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.559646) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693559682, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 251, "total_data_size": 179349, "memory_usage": 187000, "flush_reason": "Manual Compaction"}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693563186, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 177958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39632, "largest_seqno": 39967, "table_properties": {"data_size": 175830, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5313, "raw_average_key_size": 18, "raw_value_size": 171719, "raw_average_value_size": 596, "num_data_blocks": 13, "num_entries": 288, "num_filter_entries": 288, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327683, "oldest_key_time": 1759327683, "file_creation_time": 1759327693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 3570 microseconds, and 993 cpu microseconds.
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.563220) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 177958 bytes OK
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.563240) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565190) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565202) EVENT_LOG_v1 {"time_micros": 1759327693565199, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 177034, prev total WAL file size 177034, number of live WAL files 2.
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565642) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(173KB)], [92(8077KB)]
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693565719, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8448837, "oldest_snapshot_seqno": -1}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5457 keys, 6715079 bytes, temperature: kUnknown
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693610535, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6715079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6680996, "index_size": 19317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 142782, "raw_average_key_size": 26, "raw_value_size": 6584263, "raw_average_value_size": 1206, "num_data_blocks": 759, "num_entries": 5457, "num_filter_entries": 5457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327693, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.610928) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6715079 bytes
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.612558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.1 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.9 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(85.2) write-amplify(37.7) OK, records in: 5966, records dropped: 509 output_compression: NoCompression
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.612584) EVENT_LOG_v1 {"time_micros": 1759327693612572, "job": 54, "event": "compaction_finished", "compaction_time_micros": 44926, "compaction_time_cpu_micros": 19105, "output_level": 6, "num_output_files": 1, "total_output_size": 6715079, "num_input_records": 5966, "num_output_records": 5457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693612972, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327693615091, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.565537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:08:13.615220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:08:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct 01 14:08:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct 01 14:08:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct 01 14:08:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:15 compute-0 ceph-mon[74802]: osdmap e187: 3 total, 3 up, 3 in
Oct 01 14:08:15 compute-0 ceph-mon[74802]: pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:17 compute-0 nova_compute[260022]: 2025-10-01 14:08:17.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:18 compute-0 ceph-mon[74802]: pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Oct 01 14:08:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:20 compute-0 ceph-mon[74802]: pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:22 compute-0 ceph-mon[74802]: pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct 01 14:08:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct 01 14:08:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct 01 14:08:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 01 14:08:24 compute-0 ceph-mon[74802]: osdmap e188: 3 total, 3 up, 3 in
Oct 01 14:08:24 compute-0 ceph-mon[74802]: pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.378 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.379 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.379 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:08:24 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:08:24 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071918506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:08:24 compute-0 nova_compute[260022]: 2025-10-01 14:08:24.854 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.019 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.021 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.021 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.022 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.109 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.126 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.127 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.127 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.180 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:08:25 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2071918506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:08:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:08:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608409573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.579 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.584 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.608 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.610 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:08:25 compute-0 nova_compute[260022]: 2025-10-01 14:08:25.610 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:08:26 compute-0 ceph-mon[74802]: pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:08:26 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1608409573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:08:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Oct 01 14:08:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:28 compute-0 ceph-mon[74802]: pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 818 B/s wr, 6 op/s
Oct 01 14:08:28 compute-0 nova_compute[260022]: 2025-10-01 14:08:28.611 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:28 compute-0 nova_compute[260022]: 2025-10-01 14:08:28.612 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:28 compute-0 nova_compute[260022]: 2025-10-01 14:08:28.612 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:08:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:30 compute-0 nova_compute[260022]: 2025-10-01 14:08:30.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:30 compute-0 ceph-mon[74802]: pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:32 compute-0 ceph-mon[74802]: pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:33 compute-0 nova_compute[260022]: 2025-10-01 14:08:33.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:34 compute-0 nova_compute[260022]: 2025-10-01 14:08:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:34 compute-0 ceph-mon[74802]: pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:35 compute-0 podman[301627]: 2025-10-01 14:08:35.53480088 +0000 UTC m=+0.081764327 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 01 14:08:35 compute-0 podman[301628]: 2025-10-01 14:08:35.535055258 +0000 UTC m=+0.077378077 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:08:35 compute-0 podman[301626]: 2025-10-01 14:08:35.538444776 +0000 UTC m=+0.088199162 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3)
Oct 01 14:08:35 compute-0 podman[301625]: 2025-10-01 14:08:35.578970482 +0000 UTC m=+0.127722136 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 01 14:08:36 compute-0 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:36 compute-0 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:08:36 compute-0 nova_compute[260022]: 2025-10-01 14:08:36.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:08:36 compute-0 nova_compute[260022]: 2025-10-01 14:08:36.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:08:36 compute-0 nova_compute[260022]: 2025-10-01 14:08:36.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:36 compute-0 ceph-mon[74802]: pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:37 compute-0 ceph-mon[74802]: pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:40 compute-0 ceph-mon[74802]: pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:42 compute-0 ceph-mon[74802]: pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:43 compute-0 ceph-mon[74802]: pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:46 compute-0 ceph-mon[74802]: pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:08:47
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'vms', 'backups']
Oct 01 14:08:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:08:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:08:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:08:48 compute-0 ceph-mon[74802]: pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:50 compute-0 ceph-mon[74802]: pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:51 compute-0 ceph-mon[74802]: pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:53 compute-0 nova_compute[260022]: 2025-10-01 14:08:53.356 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:08:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:53 compute-0 sshd-session[301705]: Invalid user kevin from 80.94.95.116 port 59174
Oct 01 14:08:53 compute-0 sshd-session[301705]: pam_unix(sshd:auth): check pass; user unknown
Oct 01 14:08:53 compute-0 sshd-session[301705]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.95.116
Oct 01 14:08:54 compute-0 ceph-mon[74802]: pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:55 compute-0 sshd-session[301705]: Failed password for invalid user kevin from 80.94.95.116 port 59174 ssh2
Oct 01 14:08:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:08:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:08:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:08:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:08:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:08:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1433003486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:08:55 compute-0 sshd-session[301705]: Connection closed by invalid user kevin 80.94.95.116 port 59174 [preauth]
Oct 01 14:08:56 compute-0 ceph-mon[74802]: pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:08:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:08:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:08:58 compute-0 ceph-mon[74802]: pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:58 compute-0 sudo[301707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:08:58 compute-0 sudo[301707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:58 compute-0 sudo[301707]: pam_unix(sudo:session): session closed for user root
Oct 01 14:08:59 compute-0 sudo[301732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:08:59 compute-0 sudo[301732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:59 compute-0 sudo[301732]: pam_unix(sudo:session): session closed for user root
Oct 01 14:08:59 compute-0 sudo[301757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:08:59 compute-0 sudo[301757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:59 compute-0 sudo[301757]: pam_unix(sudo:session): session closed for user root
Oct 01 14:08:59 compute-0 sudo[301782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:08:59 compute-0 sudo[301782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:08:59 compute-0 sudo[301782]: pam_unix(sudo:session): session closed for user root
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:08:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f03d1003-6c34-4fcc-aeb8-03f77958a6dc does not exist
Oct 01 14:08:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a222cb74-fdd4-4886-8a37-7010d046a576 does not exist
Oct 01 14:08:59 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 52999477-a769-40fd-bd40-f4ea37737ed7 does not exist
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:08:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:08:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:08:59 compute-0 sudo[301840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:08:59 compute-0 sudo[301840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:59 compute-0 sudo[301840]: pam_unix(sudo:session): session closed for user root
Oct 01 14:08:59 compute-0 sudo[301865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:08:59 compute-0 sudo[301865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:08:59 compute-0 sudo[301865]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:00 compute-0 sudo[301890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:00 compute-0 sudo[301890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:00 compute-0 sudo[301890]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:00 compute-0 sudo[301915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:09:00 compute-0 sudo[301915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:00 compute-0 ceph-mon[74802]: pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:09:00 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.64108262 +0000 UTC m=+0.073617848 container create 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct 01 14:09:00 compute-0 systemd[1]: Started libpod-conmon-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope.
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.613485394 +0000 UTC m=+0.046020672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:00 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.755311727 +0000 UTC m=+0.187846965 container init 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.768518946 +0000 UTC m=+0.201054164 container start 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.773431292 +0000 UTC m=+0.205966500 container attach 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:09:00 compute-0 angry_almeida[301996]: 167 167
Oct 01 14:09:00 compute-0 systemd[1]: libpod-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope: Deactivated successfully.
Oct 01 14:09:00 compute-0 conmon[301996]: conmon 1ac31d5edab883412e72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope/container/memory.events
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.777713628 +0000 UTC m=+0.210248826 container died 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 01 14:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-79d046bd287249e3accf87c6d3b80967ce5abf3f30a9360863a75c23df7d67db-merged.mount: Deactivated successfully.
Oct 01 14:09:00 compute-0 podman[301980]: 2025-10-01 14:09:00.832831477 +0000 UTC m=+0.265366675 container remove 1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:09:00 compute-0 systemd[1]: libpod-conmon-1ac31d5edab883412e72093289d65790f33d81a709a2245b7695afd7cb8c4745.scope: Deactivated successfully.
Oct 01 14:09:01 compute-0 podman[302020]: 2025-10-01 14:09:01.042814394 +0000 UTC m=+0.062293289 container create 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:01 compute-0 systemd[1]: Started libpod-conmon-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope.
Oct 01 14:09:01 compute-0 podman[302020]: 2025-10-01 14:09:01.019683909 +0000 UTC m=+0.039162824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:01 compute-0 podman[302020]: 2025-10-01 14:09:01.155670177 +0000 UTC m=+0.175149102 container init 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:09:01 compute-0 podman[302020]: 2025-10-01 14:09:01.167263574 +0000 UTC m=+0.186742439 container start 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 14:09:01 compute-0 podman[302020]: 2025-10-01 14:09:01.171559621 +0000 UTC m=+0.191038486 container attach 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 14:09:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:02 compute-0 pedantic_goodall[302036]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:09:02 compute-0 pedantic_goodall[302036]: --> relative data size: 1.0
Oct 01 14:09:02 compute-0 pedantic_goodall[302036]: --> All data devices are unavailable
Oct 01 14:09:02 compute-0 systemd[1]: libpod-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Deactivated successfully.
Oct 01 14:09:02 compute-0 systemd[1]: libpod-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Consumed 1.117s CPU time.
Oct 01 14:09:02 compute-0 podman[302020]: 2025-10-01 14:09:02.329973265 +0000 UTC m=+1.349452170 container died 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 01 14:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de3896bdf1fb6ad6a78a21af54d005785e226b9da1920c7949717473ab8d131-merged.mount: Deactivated successfully.
Oct 01 14:09:02 compute-0 podman[302020]: 2025-10-01 14:09:02.396456346 +0000 UTC m=+1.415935211 container remove 5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:09:02 compute-0 systemd[1]: libpod-conmon-5e5b03a311f84d179679e9b4337d7484d55948703fc921582d291c22a6c6dcaf.scope: Deactivated successfully.
Oct 01 14:09:02 compute-0 sudo[301915]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:02 compute-0 sudo[302079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:02 compute-0 sudo[302079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:02 compute-0 sudo[302079]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:02 compute-0 ceph-mon[74802]: pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:02 compute-0 sudo[302104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:09:02 compute-0 sudo[302104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:02 compute-0 sudo[302104]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:02 compute-0 sudo[302129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:02 compute-0 sudo[302129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:02 compute-0 sudo[302129]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:02 compute-0 sudo[302154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:09:02 compute-0 sudo[302154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.161199043 +0000 UTC m=+0.053188749 container create 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:09:03 compute-0 systemd[1]: Started libpod-conmon-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope.
Oct 01 14:09:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.222981445 +0000 UTC m=+0.114971181 container init 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.137994036 +0000 UTC m=+0.029983802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.23479611 +0000 UTC m=+0.126785826 container start 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.238283271 +0000 UTC m=+0.130273007 container attach 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 14:09:03 compute-0 competent_lederberg[302238]: 167 167
Oct 01 14:09:03 compute-0 systemd[1]: libpod-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope: Deactivated successfully.
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.242143303 +0000 UTC m=+0.134133029 container died 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 14:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2b2b417f1d6b31b7ab7c7de45fddc832d841d0705e011ef864cee7b431bb9fd-merged.mount: Deactivated successfully.
Oct 01 14:09:03 compute-0 podman[302221]: 2025-10-01 14:09:03.28678161 +0000 UTC m=+0.178771356 container remove 0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:03 compute-0 systemd[1]: libpod-conmon-0ee479bc96d229b0a46b485dadaf82bc12b8550a60dfe9412b49f23497c1e2a5.scope: Deactivated successfully.
Oct 01 14:09:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:03 compute-0 podman[302260]: 2025-10-01 14:09:03.474368185 +0000 UTC m=+0.055387389 container create cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 14:09:03 compute-0 systemd[1]: Started libpod-conmon-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope.
Oct 01 14:09:03 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:03 compute-0 podman[302260]: 2025-10-01 14:09:03.452968046 +0000 UTC m=+0.033987280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:03 compute-0 podman[302260]: 2025-10-01 14:09:03.554670515 +0000 UTC m=+0.135689799 container init cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:09:03 compute-0 podman[302260]: 2025-10-01 14:09:03.561302745 +0000 UTC m=+0.142321979 container start cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:09:03 compute-0 podman[302260]: 2025-10-01 14:09:03.566133409 +0000 UTC m=+0.147152643 container attach cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:09:04 compute-0 nostalgic_black[302276]: {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     "0": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "devices": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "/dev/loop3"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             ],
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_name": "ceph_lv0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_size": "21470642176",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "name": "ceph_lv0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "tags": {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_name": "ceph",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.crush_device_class": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.encrypted": "0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_id": "0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.vdo": "0"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             },
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "vg_name": "ceph_vg0"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         }
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     ],
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     "1": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "devices": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "/dev/loop4"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             ],
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_name": "ceph_lv1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_size": "21470642176",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "name": "ceph_lv1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "tags": {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_name": "ceph",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.crush_device_class": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.encrypted": "0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_id": "1",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.vdo": "0"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             },
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "vg_name": "ceph_vg1"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         }
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     ],
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     "2": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "devices": [
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "/dev/loop5"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             ],
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_name": "ceph_lv2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_size": "21470642176",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "name": "ceph_lv2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "tags": {
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.cluster_name": "ceph",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.crush_device_class": "",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.encrypted": "0",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osd_id": "2",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:                 "ceph.vdo": "0"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             },
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "type": "block",
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:             "vg_name": "ceph_vg2"
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:         }
Oct 01 14:09:04 compute-0 nostalgic_black[302276]:     ]
Oct 01 14:09:04 compute-0 nostalgic_black[302276]: }
Oct 01 14:09:04 compute-0 systemd[1]: libpod-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope: Deactivated successfully.
Oct 01 14:09:04 compute-0 podman[302260]: 2025-10-01 14:09:04.280339921 +0000 UTC m=+0.861359155 container died cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-49740ff5509dfdbc9180f8656c285ff4ecf94a755d9311db09ce05df1fcafdfd-merged.mount: Deactivated successfully.
Oct 01 14:09:04 compute-0 podman[302260]: 2025-10-01 14:09:04.351561202 +0000 UTC m=+0.932580396 container remove cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_black, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:04 compute-0 systemd[1]: libpod-conmon-cdeab03f2aa5f5b8dcdf34852cd583f1bc0cabaf5dd829d1b53361d37a0400a3.scope: Deactivated successfully.
Oct 01 14:09:04 compute-0 sudo[302154]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:04 compute-0 sudo[302299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:04 compute-0 sudo[302299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:04 compute-0 sudo[302299]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:04 compute-0 sudo[302324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:09:04 compute-0 sudo[302324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:04 compute-0 sudo[302324]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:04 compute-0 ceph-mon[74802]: pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:04 compute-0 sudo[302349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:04 compute-0 sudo[302349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:04 compute-0 sudo[302349]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:04 compute-0 sudo[302374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:09:04 compute-0 sudo[302374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.092466573 +0000 UTC m=+0.037134579 container create 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 14:09:05 compute-0 systemd[1]: Started libpod-conmon-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope.
Oct 01 14:09:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.076524927 +0000 UTC m=+0.021192953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.182970946 +0000 UTC m=+0.127639022 container init 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.189210595 +0000 UTC m=+0.133878611 container start 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.192790698 +0000 UTC m=+0.137458744 container attach 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:09:05 compute-0 great_blackwell[302457]: 167 167
Oct 01 14:09:05 compute-0 systemd[1]: libpod-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope: Deactivated successfully.
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.195009089 +0000 UTC m=+0.139677145 container died 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 14:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5abb64f56e6365c10dc0af0db4be52a64c0406fb538610e84a67b73c5ad8ea95-merged.mount: Deactivated successfully.
Oct 01 14:09:05 compute-0 podman[302441]: 2025-10-01 14:09:05.252480533 +0000 UTC m=+0.197148539 container remove 201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:09:05 compute-0 systemd[1]: libpod-conmon-201e4e16aab4f4cbcc650fd9c9185bd55406370cc0c5822b37f9fd91e1f3fd65.scope: Deactivated successfully.
Oct 01 14:09:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:05 compute-0 podman[302483]: 2025-10-01 14:09:05.505472345 +0000 UTC m=+0.065442429 container create 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 14:09:05 compute-0 systemd[1]: Started libpod-conmon-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope.
Oct 01 14:09:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:09:05 compute-0 podman[302483]: 2025-10-01 14:09:05.482846106 +0000 UTC m=+0.042816200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:09:05 compute-0 ceph-mon[74802]: pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:05 compute-0 podman[302483]: 2025-10-01 14:09:05.658181762 +0000 UTC m=+0.218151886 container init 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 14:09:05 compute-0 podman[302483]: 2025-10-01 14:09:05.664830123 +0000 UTC m=+0.224800207 container start 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:05 compute-0 podman[302483]: 2025-10-01 14:09:05.717220826 +0000 UTC m=+0.277190890 container attach 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:09:05 compute-0 podman[302502]: 2025-10-01 14:09:05.763470145 +0000 UTC m=+0.189872528 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct 01 14:09:05 compute-0 podman[302505]: 2025-10-01 14:09:05.790487082 +0000 UTC m=+0.218542038 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:09:05 compute-0 podman[302503]: 2025-10-01 14:09:05.819446822 +0000 UTC m=+0.246229188 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 01 14:09:05 compute-0 podman[302553]: 2025-10-01 14:09:05.865435821 +0000 UTC m=+0.076716565 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]: {
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_id": 0,
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "type": "bluestore"
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     },
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_id": 2,
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "type": "bluestore"
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     },
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_id": 1,
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:         "type": "bluestore"
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]:     }
Oct 01 14:09:06 compute-0 peaceful_mclaren[302500]: }
Oct 01 14:09:06 compute-0 systemd[1]: libpod-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Deactivated successfully.
Oct 01 14:09:06 compute-0 systemd[1]: libpod-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Consumed 1.115s CPU time.
Oct 01 14:09:06 compute-0 podman[302483]: 2025-10-01 14:09:06.775207053 +0000 UTC m=+1.335177147 container died 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 14:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2bb3d784458284b1511d9a26ae2aa548b74867229130eb8cb271b760ff8ffa9-merged.mount: Deactivated successfully.
Oct 01 14:09:06 compute-0 podman[302483]: 2025-10-01 14:09:06.844790942 +0000 UTC m=+1.404761006 container remove 0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:09:06 compute-0 systemd[1]: libpod-conmon-0cdaaea1a154d4efa8eed77d66614d5c960de69818ad0eb06e957ff57c80b29e.scope: Deactivated successfully.
Oct 01 14:09:06 compute-0 sudo[302374]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:09:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:09:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:09:06 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:09:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7c9017b2-66d3-4653-9531-2ce1cfdbd7fa does not exist
Oct 01 14:09:06 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f44a7d01-8215-404b-aa42-16f9a097890a does not exist
Oct 01 14:09:06 compute-0 sudo[302626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:09:06 compute-0 sudo[302626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:06 compute-0 sudo[302626]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:07 compute-0 sudo[302651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:09:07 compute-0 sudo[302651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:09:07 compute-0 sudo[302651]: pam_unix(sudo:session): session closed for user root
Oct 01 14:09:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:09:07 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:09:07 compute-0 ceph-mon[74802]: pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:10 compute-0 ceph-mon[74802]: pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:09:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.336 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:09:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:09:12.336 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:09:12 compute-0 ceph-mon[74802]: pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:14 compute-0 ceph-mon[74802]: pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:16 compute-0 ceph-mon[74802]: pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:17 compute-0 ceph-mon[74802]: pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:18 compute-0 nova_compute[260022]: 2025-10-01 14:09:18.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:20 compute-0 ceph-mon[74802]: pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:22 compute-0 ceph-mon[74802]: pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:23 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 14:09:23 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 14:09:23 compute-0 ceph-mon[74802]: pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.368 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:09:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:25 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:09:25 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187336416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:09:25 compute-0 nova_compute[260022]: 2025-10-01 14:09:25.831 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.082 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.084 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.084 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.085 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.171 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.188 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.188 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.242 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:09:26 compute-0 ceph-mon[74802]: pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:26 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3187336416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:09:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:09:26 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831139878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.731 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.739 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.757 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.761 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:09:26 compute-0 nova_compute[260022]: 2025-10-01 14:09:26.761 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:09:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:27 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2831139878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:09:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:28 compute-0 ceph-mon[74802]: pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:28 compute-0 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:28 compute-0 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:28 compute-0 nova_compute[260022]: 2025-10-01 14:09:28.763 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:09:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:30 compute-0 nova_compute[260022]: 2025-10-01 14:09:30.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:30 compute-0 ceph-mon[74802]: pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:32 compute-0 ceph-mon[74802]: pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:34 compute-0 nova_compute[260022]: 2025-10-01 14:09:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:34 compute-0 ceph-mon[74802]: pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:35 compute-0 ceph-mon[74802]: pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:36 compute-0 nova_compute[260022]: 2025-10-01 14:09:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:36 compute-0 nova_compute[260022]: 2025-10-01 14:09:36.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:09:36 compute-0 nova_compute[260022]: 2025-10-01 14:09:36.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:09:36 compute-0 nova_compute[260022]: 2025-10-01 14:09:36.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:09:36 compute-0 nova_compute[260022]: 2025-10-01 14:09:36.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:36 compute-0 podman[302723]: 2025-10-01 14:09:36.580529594 +0000 UTC m=+0.117876194 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid)
Oct 01 14:09:36 compute-0 podman[302722]: 2025-10-01 14:09:36.593839016 +0000 UTC m=+0.131163705 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 14:09:36 compute-0 podman[302724]: 2025-10-01 14:09:36.594246669 +0000 UTC m=+0.119444433 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:09:36 compute-0 podman[302721]: 2025-10-01 14:09:36.599404973 +0000 UTC m=+0.137509247 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 14:09:37 compute-0 nova_compute[260022]: 2025-10-01 14:09:37.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:38 compute-0 ceph-mon[74802]: pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:40 compute-0 ceph-mon[74802]: pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:42 compute-0 ceph-mon[74802]: pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:44 compute-0 ceph-mon[74802]: pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:46 compute-0 ceph-mon[74802]: pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:47 compute-0 ceph-mon[74802]: pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:09:47
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Oct 01 14:09:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:09:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:09:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:09:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:50 compute-0 ceph-mon[74802]: pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:52 compute-0 ceph-mon[74802]: pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.346 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.347 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.347 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.349 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:09:53 compute-0 nova_compute[260022]: 2025-10-01 14:09:53.365 2 DEBUG nova.virt.libvirt.imagecache [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Skipping verification, no base directory at /var/lib/nova/instances/_base _get_base /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:367
Oct 01 14:09:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:54 compute-0 ceph-mon[74802]: pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:09:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:09:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:09:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:09:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:09:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3643843576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:09:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 7978 writes, 29K keys, 7978 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7978 writes, 1972 syncs, 4.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 251 writes, 437 keys, 251 commit groups, 1.0 writes per commit group, ingest: 0.18 MB, 0.00 MB/s
                                           Interval WAL: 251 writes, 121 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:09:56 compute-0 ceph-mon[74802]: pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:57 compute-0 ceph-mon[74802]: pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:09:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:09:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:09:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:00 compute-0 ceph-mon[74802]: pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:10:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 9426 writes, 34K keys, 9426 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9426 writes, 2411 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 270 writes, 502 keys, 270 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s
                                           Interval WAL: 270 writes, 127 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:10:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:02 compute-0 ceph-mon[74802]: pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:04 compute-0 ceph-mon[74802]: pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:10:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 8417 writes, 30K keys, 8417 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8417 writes, 2145 syncs, 3.92 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 249 writes, 454 keys, 249 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 249 writes, 117 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:10:06 compute-0 ceph-mon[74802]: pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:07 compute-0 sudo[302802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:07 compute-0 sudo[302802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:07 compute-0 sudo[302802]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:07 compute-0 podman[302827]: 2025-10-01 14:10:07.2950183 +0000 UTC m=+0.088874982 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 14:10:07 compute-0 podman[302828]: 2025-10-01 14:10:07.295145984 +0000 UTC m=+0.084079460 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:10:07 compute-0 sudo[302858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:10:07 compute-0 sudo[302858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:07 compute-0 sudo[302858]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:07 compute-0 podman[302829]: 2025-10-01 14:10:07.316769321 +0000 UTC m=+0.092653393 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 01 14:10:07 compute-0 podman[302826]: 2025-10-01 14:10:07.330581259 +0000 UTC m=+0.127181019 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:10:07 compute-0 sudo[302925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:07 compute-0 sudo[302925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:07 compute-0 sudo[302925]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:07 compute-0 sudo[302955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:10:07 compute-0 sudo[302955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 op/s
Oct 01 14:10:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 14:10:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:08 compute-0 sudo[302955]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:08 compute-0 sudo[303011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303011]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:10:08 compute-0 sudo[303036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303036]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:08 compute-0 sudo[303061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303061]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 01 14:10:08 compute-0 sudo[303086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 ceph-mon[74802]: pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 0 op/s
Oct 01 14:10:08 compute-0 sudo[303086]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 839d9d56-2f06-475c-903d-b4cc0fe280b4 does not exist
Oct 01 14:10:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 7eead415-c00a-466e-8282-fa052eb3f117 does not exist
Oct 01 14:10:08 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5018edee-6188-402b-bedc-02431d6e186e does not exist
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:10:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:10:08 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:10:08 compute-0 sudo[303130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:08 compute-0 sudo[303130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303130]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:10:08 compute-0 sudo[303155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303155]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:08 compute-0 sudo[303180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:08 compute-0 sudo[303180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:08 compute-0 sudo[303180]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:09 compute-0 sudo[303205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:10:09 compute-0 sudo[303205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.478380323 +0000 UTC m=+0.059435288 container create e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:10:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:09 compute-0 systemd[1]: Started libpod-conmon-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope.
Oct 01 14:10:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.458273654 +0000 UTC m=+0.039328669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.568099171 +0000 UTC m=+0.149154236 container init e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.579340528 +0000 UTC m=+0.160395523 container start e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.583680545 +0000 UTC m=+0.164735540 container attach e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:10:09 compute-0 kind_wozniak[303287]: 167 167
Oct 01 14:10:09 compute-0 systemd[1]: libpod-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope: Deactivated successfully.
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.590391939 +0000 UTC m=+0.171446964 container died e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 01 14:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bc136d1bb90e4739a013513014a0a7b9fa84b274008dbbaf6edce3524b1260c-merged.mount: Deactivated successfully.
Oct 01 14:10:09 compute-0 podman[303271]: 2025-10-01 14:10:09.638278389 +0000 UTC m=+0.219333384 container remove e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:10:09 compute-0 systemd[1]: libpod-conmon-e4c026aabab10216ed0b3b924f0670e20c154b0eb45d0187e9e41bde50f9fa08.scope: Deactivated successfully.
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:10:09 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:10:09 compute-0 ceph-mon[74802]: pgmap v2019: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:09 compute-0 podman[303311]: 2025-10-01 14:10:09.892228951 +0000 UTC m=+0.076785149 container create a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:10:09 compute-0 systemd[1]: Started libpod-conmon-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope.
Oct 01 14:10:09 compute-0 podman[303311]: 2025-10-01 14:10:09.864699987 +0000 UTC m=+0.049256265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:09 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:10 compute-0 podman[303311]: 2025-10-01 14:10:10.001994606 +0000 UTC m=+0.186550824 container init a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:10 compute-0 podman[303311]: 2025-10-01 14:10:10.016437784 +0000 UTC m=+0.200993982 container start a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 14:10:10 compute-0 podman[303311]: 2025-10-01 14:10:10.020298746 +0000 UTC m=+0.204854944 container attach a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 01 14:10:11 compute-0 interesting_edison[303328]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:10:11 compute-0 interesting_edison[303328]: --> relative data size: 1.0
Oct 01 14:10:11 compute-0 interesting_edison[303328]: --> All data devices are unavailable
Oct 01 14:10:11 compute-0 systemd[1]: libpod-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Deactivated successfully.
Oct 01 14:10:11 compute-0 systemd[1]: libpod-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Consumed 1.104s CPU time.
Oct 01 14:10:11 compute-0 podman[303311]: 2025-10-01 14:10:11.164673326 +0000 UTC m=+1.349229534 container died a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fb2add64e92e59d1fc73f3c1b9232f4c79ffbce412327f4bf3bb09727459cc6-merged.mount: Deactivated successfully.
Oct 01 14:10:11 compute-0 podman[303311]: 2025-10-01 14:10:11.233963706 +0000 UTC m=+1.418519944 container remove a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:10:11 compute-0 systemd[1]: libpod-conmon-a29a1cf3659d6af28f1b79784d2174b5a4cdf34cb8b8f68da5b0efd663640520.scope: Deactivated successfully.
Oct 01 14:10:11 compute-0 sudo[303205]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:11 compute-0 sudo[303369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:11 compute-0 sudo[303369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:11 compute-0 sudo[303369]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:11 compute-0 sudo[303394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:10:11 compute-0 sudo[303394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:11 compute-0 sudo[303394]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:11 compute-0 sudo[303419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:11 compute-0 sudo[303419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:11 compute-0 sudo[303419]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:11 compute-0 sudo[303444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:10:11 compute-0 sudo[303444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.106650489 +0000 UTC m=+0.060229842 container create 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:10:12 compute-0 systemd[1]: Started libpod-conmon-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope.
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.077270657 +0000 UTC m=+0.030850080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.207321215 +0000 UTC m=+0.160900558 container init 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.219980488 +0000 UTC m=+0.173559811 container start 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.224529292 +0000 UTC m=+0.178108725 container attach 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:12 compute-0 gallant_maxwell[303527]: 167 167
Oct 01 14:10:12 compute-0 systemd[1]: libpod-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope: Deactivated successfully.
Oct 01 14:10:12 compute-0 conmon[303527]: conmon 7d9ca906fd50bc299039 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope/container/memory.events
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.228583771 +0000 UTC m=+0.182163144 container died 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d87d50817063913dc50296ee2b6c53b07e516183e743e7b13606ff01f4d7cae9-merged.mount: Deactivated successfully.
Oct 01 14:10:12 compute-0 podman[303511]: 2025-10-01 14:10:12.288150501 +0000 UTC m=+0.241729824 container remove 7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:10:12 compute-0 systemd[1]: libpod-conmon-7d9ca906fd50bc29903940cf2f6d6d28d649a28285ecfedab3d77e166e195dce.scope: Deactivated successfully.
Oct 01 14:10:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.335 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:10:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:10:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:10:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:10:12 compute-0 podman[303553]: 2025-10-01 14:10:12.492568811 +0000 UTC m=+0.058182268 container create fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:10:12 compute-0 ceph-mon[74802]: pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:12 compute-0 systemd[1]: Started libpod-conmon-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope.
Oct 01 14:10:12 compute-0 podman[303553]: 2025-10-01 14:10:12.473774254 +0000 UTC m=+0.039387731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:12 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:12 compute-0 podman[303553]: 2025-10-01 14:10:12.608843492 +0000 UTC m=+0.174456939 container init fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 14:10:12 compute-0 podman[303553]: 2025-10-01 14:10:12.617308951 +0000 UTC m=+0.182922398 container start fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:10:12 compute-0 podman[303553]: 2025-10-01 14:10:12.62043245 +0000 UTC m=+0.186045897 container attach fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:10:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]: {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     "0": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "devices": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "/dev/loop3"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             ],
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_name": "ceph_lv0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_size": "21470642176",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "name": "ceph_lv0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "tags": {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_name": "ceph",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.crush_device_class": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.encrypted": "0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_id": "0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.vdo": "0"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             },
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "vg_name": "ceph_vg0"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         }
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     ],
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     "1": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "devices": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "/dev/loop4"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             ],
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_name": "ceph_lv1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_size": "21470642176",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "name": "ceph_lv1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "tags": {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_name": "ceph",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.crush_device_class": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.encrypted": "0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_id": "1",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.vdo": "0"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             },
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "vg_name": "ceph_vg1"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         }
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     ],
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     "2": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "devices": [
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "/dev/loop5"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             ],
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_name": "ceph_lv2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_size": "21470642176",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "name": "ceph_lv2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "tags": {
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.cluster_name": "ceph",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.crush_device_class": "",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.encrypted": "0",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osd_id": "2",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:                 "ceph.vdo": "0"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             },
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "type": "block",
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:             "vg_name": "ceph_vg2"
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:         }
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]:     ]
Oct 01 14:10:13 compute-0 adoring_sanderson[303570]: }
Oct 01 14:10:13 compute-0 systemd[1]: libpod-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope: Deactivated successfully.
Oct 01 14:10:13 compute-0 podman[303579]: 2025-10-01 14:10:13.395928299 +0000 UTC m=+0.023147976 container died fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-243cdddc9671ada73569b425d299a30d1c8fce3ab9f0ffb3ef8d7e8da04a3fe2-merged.mount: Deactivated successfully.
Oct 01 14:10:13 compute-0 podman[303579]: 2025-10-01 14:10:13.441852147 +0000 UTC m=+0.069071754 container remove fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 01 14:10:13 compute-0 systemd[1]: libpod-conmon-fb56f2109bdf3543ef91c7e6c1bc487eee6efa69077c78089f0c48af7999a320.scope: Deactivated successfully.
Oct 01 14:10:13 compute-0 sudo[303444]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:13 compute-0 sudo[303594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:13 compute-0 sudo[303594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:13 compute-0 sudo[303594]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:13 compute-0 sudo[303619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:10:13 compute-0 sudo[303619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:13 compute-0 sudo[303619]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:13 compute-0 sudo[303644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:13 compute-0 sudo[303644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:13 compute-0 sudo[303644]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:13 compute-0 sudo[303669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:10:13 compute-0 sudo[303669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.203094433 +0000 UTC m=+0.070775127 container create 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:14 compute-0 systemd[1]: Started libpod-conmon-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope.
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.176462218 +0000 UTC m=+0.044142982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.312238978 +0000 UTC m=+0.179919682 container init 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.323547907 +0000 UTC m=+0.191228601 container start 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.327844974 +0000 UTC m=+0.195525748 container attach 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:10:14 compute-0 zen_wright[303751]: 167 167
Oct 01 14:10:14 compute-0 systemd[1]: libpod-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope: Deactivated successfully.
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.330415875 +0000 UTC m=+0.198096569 container died 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-355a0af31372412ca18cc4f7a974ba9d02d92ce71b028a1130ea0f07ec8b7c75-merged.mount: Deactivated successfully.
Oct 01 14:10:14 compute-0 podman[303734]: 2025-10-01 14:10:14.387425814 +0000 UTC m=+0.255106518 container remove 8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 14:10:14 compute-0 systemd[1]: libpod-conmon-8a11d5ca421b821e7d6c2cb308e0147343b5753bc64ae0b848d461e7cf3d3a0e.scope: Deactivated successfully.
Oct 01 14:10:14 compute-0 ceph-mon[74802]: pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:14 compute-0 podman[303775]: 2025-10-01 14:10:14.645166447 +0000 UTC m=+0.066650467 container create 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:10:14 compute-0 systemd[1]: Started libpod-conmon-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope.
Oct 01 14:10:14 compute-0 podman[303775]: 2025-10-01 14:10:14.62445558 +0000 UTC m=+0.045939580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:10:14 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:10:14 compute-0 podman[303775]: 2025-10-01 14:10:14.754210258 +0000 UTC m=+0.175694328 container init 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:10:14 compute-0 podman[303775]: 2025-10-01 14:10:14.768941266 +0000 UTC m=+0.190425276 container start 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 14:10:14 compute-0 podman[303775]: 2025-10-01 14:10:14.772626033 +0000 UTC m=+0.194110073 container attach 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 14:10:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct 01 14:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct 01 14:10:15 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct 01 14:10:15 compute-0 strange_williams[303792]: {
Oct 01 14:10:15 compute-0 strange_williams[303792]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_id": 0,
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "type": "bluestore"
Oct 01 14:10:15 compute-0 strange_williams[303792]:     },
Oct 01 14:10:15 compute-0 strange_williams[303792]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_id": 2,
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "type": "bluestore"
Oct 01 14:10:15 compute-0 strange_williams[303792]:     },
Oct 01 14:10:15 compute-0 strange_williams[303792]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_id": 1,
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:10:15 compute-0 strange_williams[303792]:         "type": "bluestore"
Oct 01 14:10:15 compute-0 strange_williams[303792]:     }
Oct 01 14:10:15 compute-0 strange_williams[303792]: }
Oct 01 14:10:15 compute-0 systemd[1]: libpod-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Deactivated successfully.
Oct 01 14:10:15 compute-0 podman[303775]: 2025-10-01 14:10:15.781838011 +0000 UTC m=+1.203322001 container died 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:10:15 compute-0 systemd[1]: libpod-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Consumed 1.013s CPU time.
Oct 01 14:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ecaedfaa1e24bc31ceed85b565efa449fb3602e3862c790af737afbc920f7b-merged.mount: Deactivated successfully.
Oct 01 14:10:15 compute-0 podman[303775]: 2025-10-01 14:10:15.85392257 +0000 UTC m=+1.275406590 container remove 2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:10:15 compute-0 systemd[1]: libpod-conmon-2a5e3245504c547d3435e080b1d791fe86166a9f82b936cc2c70fd3b43dda310.scope: Deactivated successfully.
Oct 01 14:10:15 compute-0 sudo[303669]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:10:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:10:15 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:15 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 11fc248e-18b8-4e5b-a0f4-e5d515385773 does not exist
Oct 01 14:10:15 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4abfbe65-b35d-44eb-90ab-1dc60847512a does not exist
Oct 01 14:10:15 compute-0 sudo[303837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:10:16 compute-0 sudo[303837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:16 compute-0 sudo[303837]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:16 compute-0 sudo[303862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:10:16 compute-0 sudo[303862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:10:16 compute-0 sudo[303862]: pam_unix(sudo:session): session closed for user root
Oct 01 14:10:16 compute-0 ceph-mon[74802]: pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct 01 14:10:16 compute-0 ceph-mon[74802]: osdmap e189: 3 total, 3 up, 3 in
Oct 01 14:10:16 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:16 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 25 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 461 KiB/s rd, 102 B/s wr, 8 op/s
Oct 01 14:10:17 compute-0 ceph-mon[74802]: pgmap v2024: 305 pgs: 305 active+clean; 25 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 461 KiB/s rd, 102 B/s wr, 8 op/s
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:18 compute-0 nova_compute[260022]: 2025-10-01 14:10:18.364 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct 01 14:10:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct 01 14:10:18 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct 01 14:10:19 compute-0 nova_compute[260022]: 2025-10-01 14:10:19.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:19 compute-0 ceph-mon[74802]: osdmap e190: 3 total, 3 up, 3 in
Oct 01 14:10:19 compute-0 ceph-mon[74802]: pgmap v2026: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:22 compute-0 ceph-mon[74802]: pgmap v2027: 305 pgs: 305 active+clean; 21 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct 01 14:10:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct 01 14:10:22 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct 01 14:10:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct 01 14:10:23 compute-0 ceph-mon[74802]: osdmap e191: 3 total, 3 up, 3 in
Oct 01 14:10:23 compute-0 ceph-mon[74802]: pgmap v2029: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 KiB/s wr, 63 op/s
Oct 01 14:10:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Oct 01 14:10:26 compute-0 ceph-mon[74802]: pgmap v2030: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 61 op/s
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.358 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.384 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.385 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.386 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.386 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:10:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 01 14:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:10:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465569613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:10:27 compute-0 nova_compute[260022]: 2025-10-01 14:10:27.866 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct 01 14:10:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct 01 14:10:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.130 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.132 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4984MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.132 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.133 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.233 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.266 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.267 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.267 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.511 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:10:28 compute-0 ceph-mon[74802]: pgmap v2031: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct 01 14:10:28 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/465569613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:10:28 compute-0 ceph-mon[74802]: osdmap e192: 3 total, 3 up, 3 in
Oct 01 14:10:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:10:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401306405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.961 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:10:28 compute-0 nova_compute[260022]: 2025-10-01 14:10:28.969 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:10:29 compute-0 nova_compute[260022]: 2025-10-01 14:10:29.000 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:10:29 compute-0 nova_compute[260022]: 2025-10-01 14:10:29.003 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:10:29 compute-0 nova_compute[260022]: 2025-10-01 14:10:29.003 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:10:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:29 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2401306405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:10:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Oct 01 14:10:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Oct 01 14:10:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Oct 01 14:10:30 compute-0 ceph-mon[74802]: pgmap v2033: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct 01 14:10:30 compute-0 nova_compute[260022]: 2025-10-01 14:10:30.991 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:30 compute-0 nova_compute[260022]: 2025-10-01 14:10:30.992 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:30 compute-0 nova_compute[260022]: 2025-10-01 14:10:30.992 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:10:31 compute-0 nova_compute[260022]: 2025-10-01 14:10:31.341 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:31 compute-0 ceph-mon[74802]: osdmap e193: 3 total, 3 up, 3 in
Oct 01 14:10:31 compute-0 ceph-mon[74802]: pgmap v2035: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 14:10:34 compute-0 nova_compute[260022]: 2025-10-01 14:10:34.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:34 compute-0 ceph-mon[74802]: pgmap v2036: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 14:10:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 14:10:36 compute-0 nova_compute[260022]: 2025-10-01 14:10:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:36 compute-0 ceph-mon[74802]: pgmap v2037: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct 01 14:10:37 compute-0 nova_compute[260022]: 2025-10-01 14:10:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:37 compute-0 nova_compute[260022]: 2025-10-01 14:10:37.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:10:37 compute-0 nova_compute[260022]: 2025-10-01 14:10:37.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:10:37 compute-0 nova_compute[260022]: 2025-10-01 14:10:37.367 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:10:37 compute-0 nova_compute[260022]: 2025-10-01 14:10:37.368 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 15 op/s
Oct 01 14:10:37 compute-0 podman[303934]: 2025-10-01 14:10:37.553221305 +0000 UTC m=+0.089288166 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923)
Oct 01 14:10:37 compute-0 podman[303932]: 2025-10-01 14:10:37.561321952 +0000 UTC m=+0.100605675 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 01 14:10:37 compute-0 podman[303933]: 2025-10-01 14:10:37.571252987 +0000 UTC m=+0.116304353 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:10:37 compute-0 podman[303931]: 2025-10-01 14:10:37.601421014 +0000 UTC m=+0.145724887 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20250923)
Oct 01 14:10:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:38 compute-0 ceph-mon[74802]: pgmap v2038: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 15 op/s
Oct 01 14:10:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 14:10:40 compute-0 ceph-mon[74802]: pgmap v2039: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct 01 14:10:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 01 14:10:41 compute-0 ceph-mon[74802]: pgmap v2040: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct 01 14:10:42 compute-0 nova_compute[260022]: 2025-10-01 14:10:42.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:42 compute-0 nova_compute[260022]: 2025-10-01 14:10:42.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 14:10:42 compute-0 nova_compute[260022]: 2025-10-01 14:10:42.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 14:10:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 14:10:44 compute-0 ceph-mon[74802]: pgmap v2041: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct 01 14:10:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:46 compute-0 ceph-mon[74802]: pgmap v2042: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:10:47
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta']
Oct 01 14:10:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:10:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:10:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:10:48 compute-0 ceph-mon[74802]: pgmap v2043: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:50 compute-0 ceph-mon[74802]: pgmap v2044: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:51 compute-0 nova_compute[260022]: 2025-10-01 14:10:51.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:51 compute-0 nova_compute[260022]: 2025-10-01 14:10:51.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 14:10:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:51 compute-0 ceph-mon[74802]: pgmap v2045: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:54 compute-0 ceph-mon[74802]: pgmap v2046: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:10:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:10:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:10:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:10:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:10:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2132725017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:10:56 compute-0 ceph-mon[74802]: pgmap v2047: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:57 compute-0 nova_compute[260022]: 2025-10-01 14:10:57.360 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:10:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:10:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:10:58 compute-0 ceph-mon[74802]: pgmap v2048: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:10:59 compute-0 ceph-mon[74802]: pgmap v2049: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:02 compute-0 ceph-mon[74802]: pgmap v2050: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:04 compute-0 ceph-mon[74802]: pgmap v2051: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:06 compute-0 ceph-mon[74802]: pgmap v2052: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Oct 01 14:11:07 compute-0 ceph-mon[74802]: pgmap v2053: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Oct 01 14:11:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:08 compute-0 podman[304017]: 2025-10-01 14:11:08.561065387 +0000 UTC m=+0.100975477 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 14:11:08 compute-0 podman[304019]: 2025-10-01 14:11:08.567518831 +0000 UTC m=+0.092489967 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 01 14:11:08 compute-0 podman[304018]: 2025-10-01 14:11:08.575347881 +0000 UTC m=+0.110887452 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:11:08 compute-0 podman[304016]: 2025-10-01 14:11:08.59235373 +0000 UTC m=+0.137745184 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 01 14:11:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 01 14:11:10 compute-0 ceph-mon[74802]: pgmap v2054: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 01 14:11:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 01 14:11:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:11:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:11:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:11:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:11:12 compute-0 ceph-mon[74802]: pgmap v2055: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct 01 14:11:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:13 compute-0 ceph-mon[74802]: pgmap v2056: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:16 compute-0 sudo[304100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:16 compute-0 sudo[304100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:16 compute-0 sudo[304100]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:16 compute-0 sudo[304125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:11:16 compute-0 sudo[304125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:16 compute-0 sudo[304125]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:16 compute-0 sudo[304150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:16 compute-0 sudo[304150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:16 compute-0 sudo[304150]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:16 compute-0 sudo[304175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:11:16 compute-0 sudo[304175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:16 compute-0 ceph-mon[74802]: pgmap v2057: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:17 compute-0 sudo[304175]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:17 compute-0 sudo[304233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:17 compute-0 sudo[304233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:17 compute-0 sudo[304233]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:17 compute-0 sudo[304258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:11:17 compute-0 sudo[304258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:17 compute-0 sudo[304258]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:17 compute-0 sudo[304283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:17 compute-0 sudo[304283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:17 compute-0 sudo[304283]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:17 compute-0 sudo[304308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- inventory --format=json-pretty --filter-for-batch
Oct 01 14:11:17 compute-0 sudo[304308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:17 compute-0 podman[304372]: 2025-10-01 14:11:17.978523122 +0000 UTC m=+0.067133373 container create 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:11:18 compute-0 systemd[1]: Started libpod-conmon-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope.
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:17.952193225 +0000 UTC m=+0.040803526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:18.079237259 +0000 UTC m=+0.167847540 container init 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:18.087504581 +0000 UTC m=+0.176114802 container start 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:18.090722714 +0000 UTC m=+0.179332965 container attach 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:11:18 compute-0 nostalgic_brown[304388]: 167 167
Oct 01 14:11:18 compute-0 systemd[1]: libpod-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope: Deactivated successfully.
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:18.092717707 +0000 UTC m=+0.181327958 container died 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 14:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f15441bdc1ae3e250b0ce5aa56404dfff5becac8024496d0dd08cda7605e95-merged.mount: Deactivated successfully.
Oct 01 14:11:18 compute-0 podman[304372]: 2025-10-01 14:11:18.138962354 +0000 UTC m=+0.227572605 container remove 95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 14:11:18 compute-0 systemd[1]: libpod-conmon-95619fed2f00ac0302b526b2e762e5c37d3ba46968514294d8e80166b74aa6bb.scope: Deactivated successfully.
Oct 01 14:11:18 compute-0 podman[304412]: 2025-10-01 14:11:18.343913291 +0000 UTC m=+0.057252658 container create 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 14:11:18 compute-0 systemd[1]: Started libpod-conmon-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope.
Oct 01 14:11:18 compute-0 podman[304412]: 2025-10-01 14:11:18.323025358 +0000 UTC m=+0.036364705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:18 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:18 compute-0 podman[304412]: 2025-10-01 14:11:18.438663098 +0000 UTC m=+0.152002505 container init 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:11:18 compute-0 podman[304412]: 2025-10-01 14:11:18.454159111 +0000 UTC m=+0.167498468 container start 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:11:18 compute-0 podman[304412]: 2025-10-01 14:11:18.458611152 +0000 UTC m=+0.171950469 container attach 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:11:18 compute-0 ceph-mon[74802]: pgmap v2058: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct 01 14:11:19 compute-0 nova_compute[260022]: 2025-10-01 14:11:19.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Oct 01 14:11:19 compute-0 ceph-mon[74802]: pgmap v2059: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Oct 01 14:11:20 compute-0 vigilant_benz[304428]: [
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:     {
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "available": false,
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "ceph_device": false,
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "lsm_data": {},
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "lvs": [],
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "path": "/dev/sr0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "rejected_reasons": [
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "Has a FileSystem",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "Insufficient space (<5GB)"
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         ],
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         "sys_api": {
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "actuators": null,
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "device_nodes": "sr0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "devname": "sr0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "human_readable_size": "482.00 KB",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "id_bus": "ata",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "model": "QEMU DVD-ROM",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "nr_requests": "2",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "parent": "/dev/sr0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "partitions": {},
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "path": "/dev/sr0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "removable": "1",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "rev": "2.5+",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "ro": "0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "rotational": "0",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "sas_address": "",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "sas_device_handle": "",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "scheduler_mode": "mq-deadline",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "sectors": 0,
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "sectorsize": "2048",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "size": 493568.0,
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "support_discard": "2048",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "type": "disk",
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:             "vendor": "QEMU"
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:         }
Oct 01 14:11:20 compute-0 vigilant_benz[304428]:     }
Oct 01 14:11:20 compute-0 vigilant_benz[304428]: ]
Oct 01 14:11:20 compute-0 systemd[1]: libpod-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Deactivated successfully.
Oct 01 14:11:20 compute-0 systemd[1]: libpod-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Consumed 1.746s CPU time.
Oct 01 14:11:20 compute-0 podman[304412]: 2025-10-01 14:11:20.115880853 +0000 UTC m=+1.829220210 container died 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-19d781eb3d03121d08d8be265f663442ef1eae2cb54423790e55f8ae5f2becbd-merged.mount: Deactivated successfully.
Oct 01 14:11:20 compute-0 podman[304412]: 2025-10-01 14:11:20.186682791 +0000 UTC m=+1.900022138 container remove 69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct 01 14:11:20 compute-0 systemd[1]: libpod-conmon-69ff6403601a70600eadd0a8a7834b3571faf0158af8db87e60a5f152bf03f20.scope: Deactivated successfully.
Oct 01 14:11:20 compute-0 sudo[304308]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:20 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 30316e91-8959-4118-bf12-031c2b442ee7 does not exist
Oct 01 14:11:20 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev d8f3a92a-b7df-4812-8066-bb97db634bbe does not exist
Oct 01 14:11:20 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 74161024-75cb-45bd-b96a-be7e8afb5228 does not exist
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:11:20 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:11:20 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:11:20 compute-0 sudo[306650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:20 compute-0 sudo[306650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:20 compute-0 sudo[306650]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:20 compute-0 sudo[306675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:11:20 compute-0 sudo[306675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:20 compute-0 sudo[306675]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:20 compute-0 sudo[306700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:20 compute-0 sudo[306700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:20 compute-0 sudo[306700]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:20 compute-0 sudo[306725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:11:20 compute-0 sudo[306725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.112883184 +0000 UTC m=+0.061535614 container create 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 14:11:21 compute-0 systemd[1]: Started libpod-conmon-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope.
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.089922785 +0000 UTC m=+0.038575255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.214339875 +0000 UTC m=+0.162992335 container init 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.229425334 +0000 UTC m=+0.178077724 container start 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.234378861 +0000 UTC m=+0.183031301 container attach 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:21 compute-0 amazing_ptolemy[306808]: 167 167
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:21 compute-0 systemd[1]: libpod-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope: Deactivated successfully.
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:11:21 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.239569166 +0000 UTC m=+0.188221556 container died 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e902bce89963ff0edb57332d4a27675cb1cc564ae5a9aa9adcaf0e7eb29bdbf3-merged.mount: Deactivated successfully.
Oct 01 14:11:21 compute-0 podman[306792]: 2025-10-01 14:11:21.290994459 +0000 UTC m=+0.239646849 container remove 8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:11:21 compute-0 systemd[1]: libpod-conmon-8e335a495be0fd096977ca56b9bcca0c4ddbc88b9980e009da03a756ad1bccf6.scope: Deactivated successfully.
Oct 01 14:11:21 compute-0 podman[306831]: 2025-10-01 14:11:21.509688791 +0000 UTC m=+0.066329567 container create f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:11:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 14:11:21 compute-0 podman[306831]: 2025-10-01 14:11:21.477537411 +0000 UTC m=+0.034178237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:21 compute-0 systemd[1]: Started libpod-conmon-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope.
Oct 01 14:11:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:21 compute-0 podman[306831]: 2025-10-01 14:11:21.621931724 +0000 UTC m=+0.178572490 container init f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:11:21 compute-0 podman[306831]: 2025-10-01 14:11:21.639498092 +0000 UTC m=+0.196138868 container start f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:11:21 compute-0 podman[306831]: 2025-10-01 14:11:21.643886472 +0000 UTC m=+0.200527218 container attach f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 14:11:22 compute-0 ceph-mon[74802]: pgmap v2060: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 14:11:22 compute-0 sshd-session[304098]: ssh_dispatch_run_fatal: Connection from 14.103.205.40 port 57188: Connection timed out [preauth]
Oct 01 14:11:22 compute-0 jolly_northcutt[306847]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:11:22 compute-0 jolly_northcutt[306847]: --> relative data size: 1.0
Oct 01 14:11:22 compute-0 jolly_northcutt[306847]: --> All data devices are unavailable
Oct 01 14:11:22 compute-0 systemd[1]: libpod-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Deactivated successfully.
Oct 01 14:11:22 compute-0 podman[306831]: 2025-10-01 14:11:22.803662969 +0000 UTC m=+1.360303735 container died f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:22 compute-0 systemd[1]: libpod-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Consumed 1.119s CPU time.
Oct 01 14:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf59354e2db5676b0c23c52bded9a74912e6fd6a294fce3d3cc2da530ce7ef87-merged.mount: Deactivated successfully.
Oct 01 14:11:22 compute-0 podman[306831]: 2025-10-01 14:11:22.882471911 +0000 UTC m=+1.439112687 container remove f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_northcutt, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:22 compute-0 systemd[1]: libpod-conmon-f02045788f4dc56aa4fb7b3ed2a0fc8f562f34bb844bc87c88758a17d694803a.scope: Deactivated successfully.
Oct 01 14:11:22 compute-0 sudo[306725]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:23 compute-0 sudo[306888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:23 compute-0 sudo[306888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:23 compute-0 sudo[306888]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:23 compute-0 sudo[306913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:11:23 compute-0 sudo[306913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:23 compute-0 sudo[306913]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:23 compute-0 sudo[306938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:23 compute-0 sudo[306938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:23 compute-0 sudo[306938]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:23 compute-0 sudo[306963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:11:23 compute-0 sudo[306963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.673968208 +0000 UTC m=+0.063968972 container create f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 14:11:23 compute-0 systemd[1]: Started libpod-conmon-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope.
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.644434491 +0000 UTC m=+0.034435345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.772118314 +0000 UTC m=+0.162119168 container init f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.784196397 +0000 UTC m=+0.174197191 container start f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.78837243 +0000 UTC m=+0.178373224 container attach f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:23 compute-0 awesome_mirzakhani[307045]: 167 167
Oct 01 14:11:23 compute-0 systemd[1]: libpod-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope: Deactivated successfully.
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.792725348 +0000 UTC m=+0.182726172 container died f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 14:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ddb216a67587174f8d89092e4a6d51fff936d9ea1cb001ef8feadab0b6811ca-merged.mount: Deactivated successfully.
Oct 01 14:11:23 compute-0 podman[307029]: 2025-10-01 14:11:23.842761436 +0000 UTC m=+0.232762230 container remove f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:11:23 compute-0 systemd[1]: libpod-conmon-f25982592bac3794cf7c14583217bf6191baad340bd9539229ceae6ba64bfc10.scope: Deactivated successfully.
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.081313 +0000 UTC m=+0.065130269 container create 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:11:24 compute-0 systemd[1]: Started libpod-conmon-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope.
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.056085829 +0000 UTC m=+0.039903158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:24 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.183787372 +0000 UTC m=+0.167604721 container init 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.197822338 +0000 UTC m=+0.181639617 container start 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.201912308 +0000 UTC m=+0.185729647 container attach 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:11:24 compute-0 ceph-mon[74802]: pgmap v2061: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]: {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     "0": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "devices": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "/dev/loop3"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             ],
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_name": "ceph_lv0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_size": "21470642176",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "name": "ceph_lv0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "tags": {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_name": "ceph",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.crush_device_class": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.encrypted": "0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_id": "0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.vdo": "0"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             },
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "vg_name": "ceph_vg0"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         }
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     ],
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     "1": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "devices": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "/dev/loop4"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             ],
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_name": "ceph_lv1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_size": "21470642176",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "name": "ceph_lv1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "tags": {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_name": "ceph",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.crush_device_class": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.encrypted": "0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_id": "1",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.vdo": "0"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             },
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "vg_name": "ceph_vg1"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         }
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     ],
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     "2": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "devices": [
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "/dev/loop5"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             ],
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_name": "ceph_lv2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_size": "21470642176",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "name": "ceph_lv2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "tags": {
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.cluster_name": "ceph",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.crush_device_class": "",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.encrypted": "0",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osd_id": "2",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:                 "ceph.vdo": "0"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             },
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "type": "block",
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:             "vg_name": "ceph_vg2"
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:         }
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]:     ]
Oct 01 14:11:24 compute-0 busy_hofstadter[307085]: }
Oct 01 14:11:24 compute-0 systemd[1]: libpod-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope: Deactivated successfully.
Oct 01 14:11:24 compute-0 podman[307069]: 2025-10-01 14:11:24.977847301 +0000 UTC m=+0.961664580 container died 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c69477ef24ee5dd74512609e090652a8317dd32e3e5853cfe36fec34ece9ec-merged.mount: Deactivated successfully.
Oct 01 14:11:25 compute-0 podman[307069]: 2025-10-01 14:11:25.046441618 +0000 UTC m=+1.030258877 container remove 5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 14:11:25 compute-0 systemd[1]: libpod-conmon-5ee1967806f779a68a5e6745b74e365b3d75d937a525d8cd6f8013c5308a5703.scope: Deactivated successfully.
Oct 01 14:11:25 compute-0 sudo[306963]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:25 compute-0 sudo[307109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:25 compute-0 sudo[307109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:25 compute-0 sudo[307109]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:25 compute-0 sudo[307134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:11:25 compute-0 sudo[307134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:25 compute-0 sudo[307134]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:25 compute-0 sudo[307159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:25 compute-0 sudo[307159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:25 compute-0 sudo[307159]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:25 compute-0 sudo[307184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:11:25 compute-0 sudo[307184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:25 compute-0 ceph-mon[74802]: pgmap v2062: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:25 compute-0 podman[307250]: 2025-10-01 14:11:25.902152965 +0000 UTC m=+0.064570691 container create 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:11:25 compute-0 systemd[1]: Started libpod-conmon-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope.
Oct 01 14:11:25 compute-0 podman[307250]: 2025-10-01 14:11:25.876339136 +0000 UTC m=+0.038756912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:26 compute-0 podman[307250]: 2025-10-01 14:11:26.004178304 +0000 UTC m=+0.166596080 container init 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct 01 14:11:26 compute-0 podman[307250]: 2025-10-01 14:11:26.016240327 +0000 UTC m=+0.178658043 container start 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:11:26 compute-0 podman[307250]: 2025-10-01 14:11:26.022136314 +0000 UTC m=+0.184554100 container attach 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 14:11:26 compute-0 nice_cerf[307266]: 167 167
Oct 01 14:11:26 compute-0 systemd[1]: libpod-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope: Deactivated successfully.
Oct 01 14:11:26 compute-0 podman[307250]: 2025-10-01 14:11:26.024382775 +0000 UTC m=+0.186800541 container died 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eab6b679010a3f03567aa91e30e52a1b817a401add7b37220868c7ecd416e4a-merged.mount: Deactivated successfully.
Oct 01 14:11:26 compute-0 podman[307250]: 2025-10-01 14:11:26.076256052 +0000 UTC m=+0.238673748 container remove 899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 14:11:26 compute-0 systemd[1]: libpod-conmon-899affd05e93fa01518927a1203fb25c548bbad7508598033dd051374ec9f217.scope: Deactivated successfully.
Oct 01 14:11:26 compute-0 podman[307288]: 2025-10-01 14:11:26.334990366 +0000 UTC m=+0.067865986 container create aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:11:26 compute-0 systemd[1]: Started libpod-conmon-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope.
Oct 01 14:11:26 compute-0 podman[307288]: 2025-10-01 14:11:26.306799321 +0000 UTC m=+0.039675001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:11:26 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:11:26 compute-0 podman[307288]: 2025-10-01 14:11:26.452879249 +0000 UTC m=+0.185754869 container init aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:11:26 compute-0 podman[307288]: 2025-10-01 14:11:26.464409865 +0000 UTC m=+0.197285495 container start aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 14:11:26 compute-0 podman[307288]: 2025-10-01 14:11:26.468546206 +0000 UTC m=+0.201421836 container attach aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]: {
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_id": 0,
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "type": "bluestore"
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     },
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_id": 2,
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "type": "bluestore"
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     },
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_id": 1,
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:         "type": "bluestore"
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]:     }
Oct 01 14:11:27 compute-0 vigorous_liskov[307305]: }
Oct 01 14:11:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:27 compute-0 systemd[1]: libpod-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Deactivated successfully.
Oct 01 14:11:27 compute-0 podman[307288]: 2025-10-01 14:11:27.5283037 +0000 UTC m=+1.261179330 container died aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:11:27 compute-0 systemd[1]: libpod-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Consumed 1.075s CPU time.
Oct 01 14:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ebe1e967ebcd2cf3c519c7da486bbb23eca9783c8bc9168fb9cea761b76e4a0-merged.mount: Deactivated successfully.
Oct 01 14:11:27 compute-0 podman[307288]: 2025-10-01 14:11:27.58721971 +0000 UTC m=+1.320095300 container remove aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:11:27 compute-0 systemd[1]: libpod-conmon-aa1df56710f02d2985310fa4a1142b543d3b995474eff3060beb5438207b2953.scope: Deactivated successfully.
Oct 01 14:11:27 compute-0 sudo[307184]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:11:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:11:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 47512944-7147-40bd-b3a9-0145af19326e does not exist
Oct 01 14:11:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev a65d0a9c-bc0b-41ce-a888-395a980a1b96 does not exist
Oct 01 14:11:27 compute-0 sudo[307348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:11:27 compute-0 sudo[307348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:27 compute-0 sudo[307348]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:27 compute-0 sudo[307373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:11:27 compute-0 sudo[307373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:11:27 compute-0 sudo[307373]: pam_unix(sudo:session): session closed for user root
Oct 01 14:11:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:28 compute-0 ceph-mon[74802]: pgmap v2063: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.386 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.387 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:11:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:29 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:11:29 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150587183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:11:29 compute-0 nova_compute[260022]: 2025-10-01 14:11:29.830 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.036 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.126 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.141 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.141 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.142 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.315 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.418 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.419 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.431 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.460 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.504 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:11:30 compute-0 ceph-mon[74802]: pgmap v2064: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2150587183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:11:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:11:30 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747894677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.930 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.936 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.953 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.957 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:11:30 compute-0 nova_compute[260022]: 2025-10-01 14:11:30.957 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:11:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:31 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3747894677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:11:32 compute-0 ceph-mon[74802]: pgmap v2065: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:32 compute-0 nova_compute[260022]: 2025-10-01 14:11:32.954 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:32 compute-0 nova_compute[260022]: 2025-10-01 14:11:32.955 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:33 compute-0 ceph-mon[74802]: pgmap v2066: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:34 compute-0 nova_compute[260022]: 2025-10-01 14:11:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:36 compute-0 nova_compute[260022]: 2025-10-01 14:11:36.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:36 compute-0 ceph-mon[74802]: pgmap v2067: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:36 compute-0 nova_compute[260022]: 2025-10-01 14:11:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:37 compute-0 nova_compute[260022]: 2025-10-01 14:11:37.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:38 compute-0 nova_compute[260022]: 2025-10-01 14:11:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:11:38 compute-0 nova_compute[260022]: 2025-10-01 14:11:38.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:11:38 compute-0 nova_compute[260022]: 2025-10-01 14:11:38.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:11:38 compute-0 nova_compute[260022]: 2025-10-01 14:11:38.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:11:38 compute-0 ceph-mon[74802]: pgmap v2068: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:39 compute-0 podman[307445]: 2025-10-01 14:11:39.536382188 +0000 UTC m=+0.065663765 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 14:11:39 compute-0 podman[307443]: 2025-10-01 14:11:39.537011068 +0000 UTC m=+0.076578422 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:11:39 compute-0 podman[307444]: 2025-10-01 14:11:39.574807458 +0000 UTC m=+0.106365137 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:11:39 compute-0 podman[307442]: 2025-10-01 14:11:39.581661605 +0000 UTC m=+0.121005792 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 14:11:39 compute-0 ceph-mon[74802]: pgmap v2069: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:42 compute-0 ceph-mon[74802]: pgmap v2070: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:43 compute-0 ceph-mon[74802]: pgmap v2071: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:45 compute-0 ceph-mon[74802]: pgmap v2072: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:11:47
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct 01 14:11:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:11:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:11:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:11:48 compute-0 ceph-mon[74802]: pgmap v2073: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:49 compute-0 ceph-mon[74802]: pgmap v2074: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:52 compute-0 ceph-mon[74802]: pgmap v2075: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:53 compute-0 ceph-mon[74802]: pgmap v2076: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:11:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:11:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:11:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:11:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3282206593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:11:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.321674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916321699, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2091, "num_deletes": 257, "total_data_size": 3489892, "memory_usage": 3551808, "flush_reason": "Manual Compaction"}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct 01 14:11:56 compute-0 ceph-mon[74802]: pgmap v2077: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916339798, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3411670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39968, "largest_seqno": 42058, "table_properties": {"data_size": 3402017, "index_size": 6147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19408, "raw_average_key_size": 20, "raw_value_size": 3382848, "raw_average_value_size": 3557, "num_data_blocks": 272, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327694, "oldest_key_time": 1759327694, "file_creation_time": 1759327916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 18356 microseconds, and 9308 cpu microseconds.
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.340018) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3411670 bytes OK
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.340086) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341564) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341581) EVENT_LOG_v1 {"time_micros": 1759327916341576, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.341609) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3481106, prev total WAL file size 3481106, number of live WAL files 2.
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.343224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3331KB)], [95(6557KB)]
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916343302, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10126749, "oldest_snapshot_seqno": -1}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5882 keys, 8376710 bytes, temperature: kUnknown
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916381421, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8376710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8337984, "index_size": 22936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 152279, "raw_average_key_size": 25, "raw_value_size": 8231972, "raw_average_value_size": 1399, "num_data_blocks": 910, "num_entries": 5882, "num_filter_entries": 5882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759327916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.381805) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8376710 bytes
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.383276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 264.8 rd, 219.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.4) write-amplify(2.5) OK, records in: 6408, records dropped: 526 output_compression: NoCompression
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.383303) EVENT_LOG_v1 {"time_micros": 1759327916383286, "job": 56, "event": "compaction_finished", "compaction_time_micros": 38249, "compaction_time_cpu_micros": 19097, "output_level": 6, "num_output_files": 1, "total_output_size": 8376710, "num_input_records": 6408, "num_output_records": 5882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916384582, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759327916386415, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.343088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:56 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:11:56.386602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:11:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:11:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:11:58 compute-0 ceph-mon[74802]: pgmap v2078: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:11:59 compute-0 ceph-mon[74802]: pgmap v2079: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:02 compute-0 ceph-mon[74802]: pgmap v2080: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:03 compute-0 ceph-mon[74802]: pgmap v2081: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:06 compute-0 ceph-mon[74802]: pgmap v2082: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:07 compute-0 ceph-mon[74802]: pgmap v2083: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:10 compute-0 podman[307526]: 2025-10-01 14:12:10.507877348 +0000 UTC m=+0.056933297 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 01 14:12:10 compute-0 podman[307527]: 2025-10-01 14:12:10.533455391 +0000 UTC m=+0.076373345 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923)
Oct 01 14:12:10 compute-0 podman[307533]: 2025-10-01 14:12:10.550058448 +0000 UTC m=+0.088010085 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:12:10 compute-0 podman[307525]: 2025-10-01 14:12:10.579440831 +0000 UTC m=+0.124387900 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct 01 14:12:10 compute-0 ceph-mon[74802]: pgmap v2084: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:11 compute-0 ceph-mon[74802]: pgmap v2085: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.337 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:12:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:12:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:12:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:13 compute-0 ceph-mon[74802]: pgmap v2086: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Oct 01 14:12:14 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Oct 01 14:12:14 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Oct 01 14:12:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Oct 01 14:12:15 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Oct 01 14:12:15 compute-0 ceph-mon[74802]: osdmap e194: 3 total, 3 up, 3 in
Oct 01 14:12:15 compute-0 ceph-mon[74802]: pgmap v2088: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:15 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Oct 01 14:12:16 compute-0 ceph-mon[74802]: osdmap e195: 3 total, 3 up, 3 in
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 8.4 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 39 op/s
Oct 01 14:12:17 compute-0 ceph-mon[74802]: pgmap v2090: 305 pgs: 305 active+clean; 8.4 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 39 op/s
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:19 compute-0 nova_compute[260022]: 2025-10-01 14:12:19.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 14:12:20 compute-0 ceph-mon[74802]: pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 14:12:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 14:12:21 compute-0 ceph-mon[74802]: pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct 01 14:12:22 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Oct 01 14:12:23 compute-0 ceph-mon[74802]: pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 43 op/s
Oct 01 14:12:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 01 14:12:25 compute-0 ceph-mon[74802]: pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct 01 14:12:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Oct 01 14:12:27 compute-0 ceph-mon[74802]: pgmap v2095: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Oct 01 14:12:27 compute-0 sudo[307606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:27 compute-0 sudo[307606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:27 compute-0 sudo[307606]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:27 compute-0 sudo[307631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:12:27 compute-0 sudo[307631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:27 compute-0 sudo[307631]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:28 compute-0 sudo[307656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:28 compute-0 sudo[307656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:28 compute-0 sudo[307656]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:28 compute-0 sudo[307681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:12:28 compute-0 sudo[307681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:28 compute-0 sudo[307681]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4b661462-b6c3-481a-81f9-80ef15731ef0 does not exist
Oct 01 14:12:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 8325ef5d-0bbb-44cb-9fe6-79799f818864 does not exist
Oct 01 14:12:28 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 60d36cd9-7da2-457d-a750-2da46646cc1a does not exist
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:12:28 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:12:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:12:28 compute-0 sudo[307738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:28 compute-0 sudo[307738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:28 compute-0 sudo[307738]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:28 compute-0 sudo[307763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:12:28 compute-0 sudo[307763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:28 compute-0 sudo[307763]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:28 compute-0 sudo[307788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:28 compute-0 sudo[307788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:28 compute-0 sudo[307788]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:29 compute-0 sudo[307813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:12:29 compute-0 sudo[307813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.473319552 +0000 UTC m=+0.083304516 container create 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct 01 14:12:29 compute-0 systemd[1]: Started libpod-conmon-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope.
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.436524393 +0000 UTC m=+0.046509427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 2.7 MiB/s wr, 5 op/s
Oct 01 14:12:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.572084367 +0000 UTC m=+0.182069401 container init 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.582755986 +0000 UTC m=+0.192740960 container start 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.586602488 +0000 UTC m=+0.196587472 container attach 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:12:29 compute-0 mystifying_shannon[307894]: 167 167
Oct 01 14:12:29 compute-0 systemd[1]: libpod-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope: Deactivated successfully.
Oct 01 14:12:29 compute-0 conmon[307894]: conmon 7c539686d3fe395db8ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope/container/memory.events
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.593220678 +0000 UTC m=+0.203205662 container died 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 01 14:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc4e9b63f5609a0efc54d6d82b33998e9bc2020f0098239692bd8738e0cce8b2-merged.mount: Deactivated successfully.
Oct 01 14:12:29 compute-0 podman[307878]: 2025-10-01 14:12:29.646716486 +0000 UTC m=+0.256701420 container remove 7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 01 14:12:29 compute-0 systemd[1]: libpod-conmon-7c539686d3fe395db8aeb6efc3a868ed5bd7aeb7e0acdafb6e65fcd9e26580a2.scope: Deactivated successfully.
Oct 01 14:12:29 compute-0 ceph-mon[74802]: pgmap v2096: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 2.7 MiB/s wr, 5 op/s
Oct 01 14:12:29 compute-0 podman[307918]: 2025-10-01 14:12:29.852405276 +0000 UTC m=+0.043777131 container create f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:12:29 compute-0 systemd[1]: Started libpod-conmon-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope.
Oct 01 14:12:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:29 compute-0 podman[307918]: 2025-10-01 14:12:29.83711318 +0000 UTC m=+0.028485065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:29 compute-0 podman[307918]: 2025-10-01 14:12:29.944228231 +0000 UTC m=+0.135600186 container init f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:12:29 compute-0 podman[307918]: 2025-10-01 14:12:29.964183745 +0000 UTC m=+0.155555640 container start f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 01 14:12:29 compute-0 podman[307918]: 2025-10-01 14:12:29.969014728 +0000 UTC m=+0.160386673 container attach f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.376 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.376 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.377 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:12:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:12:30 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203281331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:12:30 compute-0 nova_compute[260022]: 2025-10-01 14:12:30.832 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:12:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2203281331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:12:31 compute-0 interesting_wright[307934]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:12:31 compute-0 interesting_wright[307934]: --> relative data size: 1.0
Oct 01 14:12:31 compute-0 interesting_wright[307934]: --> All data devices are unavailable
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.027 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.028 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4962MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.028 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.029 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:12:31 compute-0 systemd[1]: libpod-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Deactivated successfully.
Oct 01 14:12:31 compute-0 systemd[1]: libpod-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Consumed 1.026s CPU time.
Oct 01 14:12:31 compute-0 podman[307918]: 2025-10-01 14:12:31.056068388 +0000 UTC m=+1.247440243 container died f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-37aacb5c7bc37db9d4429d351982db2d147863741a00cee11a279c70b2f69700-merged.mount: Deactivated successfully.
Oct 01 14:12:31 compute-0 podman[307918]: 2025-10-01 14:12:31.110068921 +0000 UTC m=+1.301440786 container remove f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.117 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:12:31 compute-0 systemd[1]: libpod-conmon-f7e42c157c726a57459552cc0cbca68a5e1466c949847357ffe8b0e26b373df5.scope: Deactivated successfully.
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.137 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.137 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.137 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:12:31 compute-0 sudo[307813]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.182 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:12:31 compute-0 sudo[307998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:31 compute-0 sudo[307998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:31 compute-0 sudo[307998]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:31 compute-0 sudo[308024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:12:31 compute-0 sudo[308024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:31 compute-0 sudo[308024]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:31 compute-0 sudo[308049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:31 compute-0 sudo[308049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:31 compute-0 sudo[308049]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:31 compute-0 sudo[308093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:12:31 compute-0 sudo[308093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:12:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986016335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.615 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.624 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.643 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.645 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:12:31 compute-0 nova_compute[260022]: 2025-10-01 14:12:31.646 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.8055503 +0000 UTC m=+0.062396972 container create 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 14:12:31 compute-0 systemd[1]: Started libpod-conmon-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope.
Oct 01 14:12:31 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.781152596 +0000 UTC m=+0.037999318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:31 compute-0 ceph-mon[74802]: pgmap v2097: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:31 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/986016335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.88367985 +0000 UTC m=+0.140526542 container init 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.890206298 +0000 UTC m=+0.147052970 container start 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.894493904 +0000 UTC m=+0.151340636 container attach 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 14:12:31 compute-0 wonderful_wing[308177]: 167 167
Oct 01 14:12:31 compute-0 systemd[1]: libpod-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope: Deactivated successfully.
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.897409356 +0000 UTC m=+0.154256038 container died 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ba4a732b8be5bf5d292dd78d042413295666695a23026614d5b3fe0e4e2a759-merged.mount: Deactivated successfully.
Oct 01 14:12:31 compute-0 podman[308161]: 2025-10-01 14:12:31.94164055 +0000 UTC m=+0.198487192 container remove 251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:12:31 compute-0 systemd[1]: libpod-conmon-251118b23a16f2c8de0ca8d57349f193e5d3cca94a3516e2503385790ccbf6a2.scope: Deactivated successfully.
Oct 01 14:12:32 compute-0 podman[308201]: 2025-10-01 14:12:32.141200466 +0000 UTC m=+0.053538261 container create dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 14:12:32 compute-0 systemd[1]: Started libpod-conmon-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope.
Oct 01 14:12:32 compute-0 podman[308201]: 2025-10-01 14:12:32.113772035 +0000 UTC m=+0.026109860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:32 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:32 compute-0 podman[308201]: 2025-10-01 14:12:32.257781016 +0000 UTC m=+0.170118821 container init dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct 01 14:12:32 compute-0 podman[308201]: 2025-10-01 14:12:32.26984705 +0000 UTC m=+0.182184875 container start dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:12:32 compute-0 podman[308201]: 2025-10-01 14:12:32.274461566 +0000 UTC m=+0.186799371 container attach dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:12:32 compute-0 nova_compute[260022]: 2025-10-01 14:12:32.647 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:32 compute-0 nova_compute[260022]: 2025-10-01 14:12:32.648 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:32 compute-0 nova_compute[260022]: 2025-10-01 14:12:32.649 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:12:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]: {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     "0": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "devices": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "/dev/loop3"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             ],
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_name": "ceph_lv0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_size": "21470642176",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "name": "ceph_lv0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "tags": {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_name": "ceph",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.crush_device_class": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.encrypted": "0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_id": "0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.vdo": "0"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             },
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "vg_name": "ceph_vg0"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         }
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     ],
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     "1": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "devices": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "/dev/loop4"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             ],
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_name": "ceph_lv1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_size": "21470642176",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "name": "ceph_lv1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "tags": {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_name": "ceph",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.crush_device_class": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.encrypted": "0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_id": "1",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.vdo": "0"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             },
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "vg_name": "ceph_vg1"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         }
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     ],
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     "2": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "devices": [
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "/dev/loop5"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             ],
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_name": "ceph_lv2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_size": "21470642176",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "name": "ceph_lv2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "tags": {
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.cluster_name": "ceph",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.crush_device_class": "",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.encrypted": "0",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osd_id": "2",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:                 "ceph.vdo": "0"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             },
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "type": "block",
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:             "vg_name": "ceph_vg2"
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:         }
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]:     ]
Oct 01 14:12:33 compute-0 hopeful_dhawan[308217]: }
Oct 01 14:12:33 compute-0 systemd[1]: libpod-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope: Deactivated successfully.
Oct 01 14:12:33 compute-0 podman[308201]: 2025-10-01 14:12:33.067189982 +0000 UTC m=+0.979527777 container died dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d77a3730ff868898ad2f3ea5ec5d9d76384310843071ae0c310c0dc196f63207-merged.mount: Deactivated successfully.
Oct 01 14:12:33 compute-0 podman[308201]: 2025-10-01 14:12:33.125184424 +0000 UTC m=+1.037522219 container remove dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 14:12:33 compute-0 systemd[1]: libpod-conmon-dd232aa7dca824d0b6f04f415fb143131f4c85766100453949cee030a0384f68.scope: Deactivated successfully.
Oct 01 14:12:33 compute-0 sudo[308093]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:33 compute-0 sudo[308239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:33 compute-0 sudo[308239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:33 compute-0 sudo[308239]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:33 compute-0 sudo[308264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:12:33 compute-0 sudo[308264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:33 compute-0 sudo[308264]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:33 compute-0 nova_compute[260022]: 2025-10-01 14:12:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:33 compute-0 sudo[308289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:33 compute-0 sudo[308289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:33 compute-0 sudo[308289]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:33 compute-0 sudo[308314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:12:33 compute-0 sudo[308314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:33 compute-0 ceph-mon[74802]: pgmap v2098: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:33.710 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:12:33 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:33.712 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:12:33 compute-0 podman[308379]: 2025-10-01 14:12:33.963767245 +0000 UTC m=+0.073177074 container create 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:12:34 compute-0 systemd[1]: Started libpod-conmon-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope.
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:33.936029454 +0000 UTC m=+0.045439343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:34.077759803 +0000 UTC m=+0.187169732 container init 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:34.088309129 +0000 UTC m=+0.197718958 container start 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:34.092659687 +0000 UTC m=+0.202069616 container attach 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 14:12:34 compute-0 mystifying_elbakyan[308395]: 167 167
Oct 01 14:12:34 compute-0 systemd[1]: libpod-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope: Deactivated successfully.
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:34.097218921 +0000 UTC m=+0.206628760 container died 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b94264f0cbc97a1b45247f2f97b63eaa5bf4803e54012846e92225b9a1ad8b6f-merged.mount: Deactivated successfully.
Oct 01 14:12:34 compute-0 podman[308379]: 2025-10-01 14:12:34.149814231 +0000 UTC m=+0.259224020 container remove 707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 01 14:12:34 compute-0 systemd[1]: libpod-conmon-707c7f39ed03e3906acf17449b327fc526a2933224a4aed592e52999125c331b.scope: Deactivated successfully.
Oct 01 14:12:34 compute-0 nova_compute[260022]: 2025-10-01 14:12:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:34 compute-0 podman[308419]: 2025-10-01 14:12:34.328162723 +0000 UTC m=+0.031326056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:12:34 compute-0 podman[308419]: 2025-10-01 14:12:34.445398135 +0000 UTC m=+0.148561388 container create 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:12:34 compute-0 systemd[1]: Started libpod-conmon-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope.
Oct 01 14:12:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:12:34 compute-0 podman[308419]: 2025-10-01 14:12:34.704331555 +0000 UTC m=+0.407494838 container init 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct 01 14:12:34 compute-0 podman[308419]: 2025-10-01 14:12:34.716098758 +0000 UTC m=+0.419262021 container start 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:12:34 compute-0 podman[308419]: 2025-10-01 14:12:34.723850655 +0000 UTC m=+0.427013978 container attach 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:12:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:35 compute-0 ceph-mon[74802]: pgmap v2099: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:35 compute-0 eager_lederberg[308436]: {
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_id": 0,
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "type": "bluestore"
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     },
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_id": 2,
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "type": "bluestore"
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     },
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_id": 1,
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:         "type": "bluestore"
Oct 01 14:12:35 compute-0 eager_lederberg[308436]:     }
Oct 01 14:12:35 compute-0 eager_lederberg[308436]: }
Oct 01 14:12:35 compute-0 systemd[1]: libpod-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Deactivated successfully.
Oct 01 14:12:35 compute-0 podman[308419]: 2025-10-01 14:12:35.742643197 +0000 UTC m=+1.445806420 container died 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:12:35 compute-0 systemd[1]: libpod-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Consumed 1.034s CPU time.
Oct 01 14:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2e7be7f68b5d26b6aa1aa44c0cc2b86737e791943b294abc25375bb72783044-merged.mount: Deactivated successfully.
Oct 01 14:12:36 compute-0 podman[308419]: 2025-10-01 14:12:36.040985818 +0000 UTC m=+1.744149091 container remove 3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:12:36 compute-0 systemd[1]: libpod-conmon-3780d73c681e7862b665a5a7eeeedc82a73b052b9bebb3652c4a8690c3f577fc.scope: Deactivated successfully.
Oct 01 14:12:36 compute-0 sudo[308314]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:12:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:36 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:12:36 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:36 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 444afb38-9def-4805-b08e-2caa0cf06be1 does not exist
Oct 01 14:12:36 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4b2a4260-7777-4c9e-8298-254742c51710 does not exist
Oct 01 14:12:36 compute-0 sudo[308483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:12:36 compute-0 sudo[308483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:36 compute-0 sudo[308483]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:36 compute-0 sudo[308508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:12:36 compute-0 sudo[308508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:12:36 compute-0 sudo[308508]: pam_unix(sudo:session): session closed for user root
Oct 01 14:12:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:37 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:12:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:38 compute-0 ceph-mon[74802]: pgmap v2100: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.360 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:38 compute-0 nova_compute[260022]: 2025-10-01 14:12:38.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:12:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:39 compute-0 ceph-mon[74802]: pgmap v2101: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Oct 01 14:12:41 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Oct 01 14:12:41 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Oct 01 14:12:41 compute-0 podman[308536]: 2025-10-01 14:12:41.51765614 +0000 UTC m=+0.057480116 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 14:12:41 compute-0 podman[308535]: 2025-10-01 14:12:41.519021483 +0000 UTC m=+0.062417732 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 01 14:12:41 compute-0 podman[308534]: 2025-10-01 14:12:41.524331902 +0000 UTC m=+0.071229192 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:12:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:41 compute-0 podman[308533]: 2025-10-01 14:12:41.570938142 +0000 UTC m=+0.118643878 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 01 14:12:41 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:12:41.714 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:12:42 compute-0 ceph-mon[74802]: osdmap e196: 3 total, 3 up, 3 in
Oct 01 14:12:42 compute-0 ceph-mon[74802]: pgmap v2103: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:42 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:43 compute-0 ceph-mon[74802]: pgmap v2104: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:45 compute-0 ceph-mon[74802]: pgmap v2105: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:47 compute-0 ceph-mon[74802]: pgmap v2106: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:12:47
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct 01 14:12:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:12:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Oct 01 14:12:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Oct 01 14:12:48 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:12:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:12:49 compute-0 ceph-mon[74802]: osdmap e197: 3 total, 3 up, 3 in
Oct 01 14:12:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 01 14:12:50 compute-0 ceph-mon[74802]: pgmap v2108: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Oct 01 14:12:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:51 compute-0 ceph-mon[74802]: pgmap v2109: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct 01 14:12:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct 01 14:12:53 compute-0 ceph-mon[74802]: pgmap v2110: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct 01 14:12:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:12:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:12:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:12:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:12:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:12:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3931144448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:12:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct 01 14:12:56 compute-0 ceph-mon[74802]: pgmap v2111: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 op/s
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:57 compute-0 ceph-mon[74802]: pgmap v2112: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:12:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:12:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:12:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:12:59 compute-0 ceph-mon[74802]: pgmap v2113: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:01 compute-0 nova_compute[260022]: 2025-10-01 14:13:01.357 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:01 compute-0 ceph-mon[74802]: pgmap v2114: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:03 compute-0 ceph-mon[74802]: pgmap v2115: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:05 compute-0 ceph-mon[74802]: pgmap v2116: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:07 compute-0 ceph-mon[74802]: pgmap v2117: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:07 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:09 compute-0 ceph-mon[74802]: pgmap v2118: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:11 compute-0 ceph-mon[74802]: pgmap v2119: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.338 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:13:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:13:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:13:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:13:12 compute-0 podman[308618]: 2025-10-01 14:13:12.515508514 +0000 UTC m=+0.065738077 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:13:12 compute-0 podman[308625]: 2025-10-01 14:13:12.53081059 +0000 UTC m=+0.073453623 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:13:12 compute-0 podman[308617]: 2025-10-01 14:13:12.546568 +0000 UTC m=+0.102894557 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 01 14:13:12 compute-0 podman[308619]: 2025-10-01 14:13:12.54656771 +0000 UTC m=+0.093698965 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 14:13:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:13 compute-0 ceph-mon[74802]: pgmap v2120: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:15 compute-0 ceph-mon[74802]: pgmap v2121: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:17 compute-0 ceph-mon[74802]: pgmap v2122: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:17 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:19 compute-0 ceph-mon[74802]: pgmap v2123: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:21 compute-0 nova_compute[260022]: 2025-10-01 14:13:21.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:21 compute-0 ceph-mon[74802]: pgmap v2124: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:23 compute-0 ceph-mon[74802]: pgmap v2125: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:25 compute-0 ceph-mon[74802]: pgmap v2126: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:27 compute-0 ceph-mon[74802]: pgmap v2127: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:29 compute-0 ceph-mon[74802]: pgmap v2128: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.373 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.375 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.375 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:13:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:13:30 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123375352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:13:30 compute-0 nova_compute[260022]: 2025-10-01 14:13:30.882 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:13:30 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1123375352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.073 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.074 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5026MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.074 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.075 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.173 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.187 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.187 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.232 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:13:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:13:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644878861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.665 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.672 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.693 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.695 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:13:31 compute-0 nova_compute[260022]: 2025-10-01 14:13:31.695 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:13:31 compute-0 ceph-mon[74802]: pgmap v2129: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:31 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/644878861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:13:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:33 compute-0 ceph-mon[74802]: pgmap v2130: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:33 compute-0 nova_compute[260022]: 2025-10-01 14:13:33.692 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:33 compute-0 nova_compute[260022]: 2025-10-01 14:13:33.693 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:33 compute-0 nova_compute[260022]: 2025-10-01 14:13:33.693 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:13:34 compute-0 nova_compute[260022]: 2025-10-01 14:13:34.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:35 compute-0 nova_compute[260022]: 2025-10-01 14:13:35.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:35 compute-0 ceph-mon[74802]: pgmap v2131: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:36 compute-0 sudo[308741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:36 compute-0 sudo[308741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:36 compute-0 sudo[308741]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:36 compute-0 sudo[308766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:13:36 compute-0 sudo[308766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:36 compute-0 sudo[308766]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:36 compute-0 sudo[308791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:36 compute-0 sudo[308791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:36 compute-0 sudo[308791]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:36 compute-0 sudo[308816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 01 14:13:36 compute-0 sudo[308816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:37 compute-0 sudo[308816]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:13:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:37 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:13:37 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:37 compute-0 sudo[308863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:37 compute-0 sudo[308863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:37 compute-0 sudo[308863]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:37 compute-0 sudo[308888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:13:37 compute-0 sudo[308888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:37 compute-0 sudo[308888]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:37 compute-0 sudo[308913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:37 compute-0 sudo[308913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:37 compute-0 sudo[308913]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:37 compute-0 sudo[308938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:13:37 compute-0 sudo[308938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:37 compute-0 sudo[308938]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:38 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1bb09119-f937-4a0d-be16-2a75d1999a88 does not exist
Oct 01 14:13:38 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 286ed1f2-f6b6-4b5e-ae4f-9bd6a4f14a19 does not exist
Oct 01 14:13:38 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e799a6af-8ab2-4d12-96ed-e567873bcdc4 does not exist
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:13:38 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:38 compute-0 ceph-mon[74802]: pgmap v2132: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:13:38 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:13:38 compute-0 sudo[308994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:38 compute-0 sudo[308994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:38 compute-0 sudo[308994]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:38 compute-0 sudo[309019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:13:38 compute-0 sudo[309019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:38 compute-0 sudo[309019]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:38 compute-0 sudo[309044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:38 compute-0 sudo[309044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:38 compute-0 sudo[309044]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:38 compute-0 sudo[309069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:13:38 compute-0 sudo[309069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:38 compute-0 nova_compute[260022]: 2025-10-01 14:13:38.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.769895316 +0000 UTC m=+0.071738099 container create b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct 01 14:13:38 compute-0 systemd[1]: Started libpod-conmon-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope.
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.730279689 +0000 UTC m=+0.032122472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:38 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.867081021 +0000 UTC m=+0.168923864 container init b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.875308142 +0000 UTC m=+0.177150895 container start b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.879858256 +0000 UTC m=+0.181701079 container attach b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct 01 14:13:38 compute-0 adoring_dirac[309153]: 167 167
Oct 01 14:13:38 compute-0 systemd[1]: libpod-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope: Deactivated successfully.
Oct 01 14:13:38 compute-0 conmon[309153]: conmon b7954a4594e9520d7ab4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope/container/memory.events
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.884064 +0000 UTC m=+0.185906753 container died b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 14:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4de5148aa11e4e0c8199f8719592c0896606e4352930e2c555b24b6db758d9-merged.mount: Deactivated successfully.
Oct 01 14:13:38 compute-0 podman[309136]: 2025-10-01 14:13:38.927282852 +0000 UTC m=+0.229125625 container remove b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:13:38 compute-0 systemd[1]: libpod-conmon-b7954a4594e9520d7ab4caa3a33a17d81db8ff4f4f0ea72c54464ea4fc179348.scope: Deactivated successfully.
Oct 01 14:13:39 compute-0 podman[309176]: 2025-10-01 14:13:39.146998437 +0000 UTC m=+0.058696614 container create 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:13:39 compute-0 systemd[1]: Started libpod-conmon-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope.
Oct 01 14:13:39 compute-0 podman[309176]: 2025-10-01 14:13:39.116913373 +0000 UTC m=+0.028611590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:39 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:39 compute-0 podman[309176]: 2025-10-01 14:13:39.260626764 +0000 UTC m=+0.172324931 container init 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 14:13:39 compute-0 podman[309176]: 2025-10-01 14:13:39.272699088 +0000 UTC m=+0.184397235 container start 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:13:39 compute-0 podman[309176]: 2025-10-01 14:13:39.277073296 +0000 UTC m=+0.188771443 container attach 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:13:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:39 compute-0 ceph-mon[74802]: pgmap v2133: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:40 compute-0 friendly_euclid[309193]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:13:40 compute-0 friendly_euclid[309193]: --> relative data size: 1.0
Oct 01 14:13:40 compute-0 friendly_euclid[309193]: --> All data devices are unavailable
Oct 01 14:13:40 compute-0 nova_compute[260022]: 2025-10-01 14:13:40.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:40 compute-0 nova_compute[260022]: 2025-10-01 14:13:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:13:40 compute-0 nova_compute[260022]: 2025-10-01 14:13:40.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:13:40 compute-0 nova_compute[260022]: 2025-10-01 14:13:40.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:13:40 compute-0 nova_compute[260022]: 2025-10-01 14:13:40.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:13:40 compute-0 systemd[1]: libpod-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Deactivated successfully.
Oct 01 14:13:40 compute-0 systemd[1]: libpod-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Consumed 1.054s CPU time.
Oct 01 14:13:40 compute-0 podman[309176]: 2025-10-01 14:13:40.365645884 +0000 UTC m=+1.277344031 container died 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb481ef768718f08ee600c19d34df83755a24cef91b5ec811bfc429071856a8-merged.mount: Deactivated successfully.
Oct 01 14:13:40 compute-0 podman[309176]: 2025-10-01 14:13:40.426245378 +0000 UTC m=+1.337943535 container remove 5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_euclid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:13:40 compute-0 systemd[1]: libpod-conmon-5254c823e72213985c39c94289c7460c873c7eb3843f3b3609f18541d2192bb0.scope: Deactivated successfully.
Oct 01 14:13:40 compute-0 sudo[309069]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:40 compute-0 sudo[309237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:40 compute-0 sudo[309237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:40 compute-0 sudo[309237]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:40 compute-0 sudo[309262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:13:40 compute-0 sudo[309262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:40 compute-0 sudo[309262]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:40 compute-0 sudo[309287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:40 compute-0 sudo[309287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:40 compute-0 sudo[309287]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:40 compute-0 sudo[309312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:13:40 compute-0 sudo[309312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.129723021 +0000 UTC m=+0.051364022 container create db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 14:13:41 compute-0 systemd[1]: Started libpod-conmon-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope.
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.111794411 +0000 UTC m=+0.033435432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.236640714 +0000 UTC m=+0.158281755 container init db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.245783475 +0000 UTC m=+0.167424486 container start db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.248872393 +0000 UTC m=+0.170513434 container attach db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 01 14:13:41 compute-0 musing_kirch[309395]: 167 167
Oct 01 14:13:41 compute-0 systemd[1]: libpod-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope: Deactivated successfully.
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.250935728 +0000 UTC m=+0.172576769 container died db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-09d43343032ceb89a3e24d211dd4aaa15d8d5f4c0e14e9a2a8feec812f93950b-merged.mount: Deactivated successfully.
Oct 01 14:13:41 compute-0 podman[309378]: 2025-10-01 14:13:41.302808736 +0000 UTC m=+0.224449777 container remove db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 14:13:41 compute-0 systemd[1]: libpod-conmon-db6efbea5d3c9999bba7ecd2663c7cf963d2ca3a0fad2788dc66eace01da7c99.scope: Deactivated successfully.
Oct 01 14:13:41 compute-0 podman[309419]: 2025-10-01 14:13:41.556204179 +0000 UTC m=+0.047093746 container create 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:13:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:41 compute-0 systemd[1]: Started libpod-conmon-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope.
Oct 01 14:13:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:41 compute-0 podman[309419]: 2025-10-01 14:13:41.537895679 +0000 UTC m=+0.028785296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:41 compute-0 podman[309419]: 2025-10-01 14:13:41.726439984 +0000 UTC m=+0.217329591 container init 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:13:41 compute-0 ceph-mon[74802]: pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:41 compute-0 podman[309419]: 2025-10-01 14:13:41.732572948 +0000 UTC m=+0.223462525 container start 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:13:41 compute-0 podman[309419]: 2025-10-01 14:13:41.753341818 +0000 UTC m=+0.244231415 container attach 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 14:13:42 compute-0 elated_goldstine[309436]: {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     "0": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "devices": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "/dev/loop3"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             ],
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_name": "ceph_lv0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_size": "21470642176",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "name": "ceph_lv0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "tags": {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_name": "ceph",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.crush_device_class": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.encrypted": "0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_id": "0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.vdo": "0"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             },
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "vg_name": "ceph_vg0"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         }
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     ],
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     "1": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "devices": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "/dev/loop4"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             ],
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_name": "ceph_lv1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_size": "21470642176",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "name": "ceph_lv1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "tags": {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_name": "ceph",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.crush_device_class": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.encrypted": "0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_id": "1",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.vdo": "0"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             },
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "vg_name": "ceph_vg1"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         }
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     ],
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     "2": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "devices": [
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "/dev/loop5"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             ],
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_name": "ceph_lv2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_size": "21470642176",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "name": "ceph_lv2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "tags": {
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.cluster_name": "ceph",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.crush_device_class": "",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.encrypted": "0",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osd_id": "2",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:                 "ceph.vdo": "0"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             },
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "type": "block",
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:             "vg_name": "ceph_vg2"
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:         }
Oct 01 14:13:42 compute-0 elated_goldstine[309436]:     ]
Oct 01 14:13:42 compute-0 elated_goldstine[309436]: }
Oct 01 14:13:42 compute-0 systemd[1]: libpod-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope: Deactivated successfully.
Oct 01 14:13:42 compute-0 podman[309419]: 2025-10-01 14:13:42.505874498 +0000 UTC m=+0.996764125 container died 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 14:13:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d01020d600f066bd97405ae0db684b777afc00aed91f33f28cc2745fae74fe9-merged.mount: Deactivated successfully.
Oct 01 14:13:42 compute-0 podman[309419]: 2025-10-01 14:13:42.574445885 +0000 UTC m=+1.065335462 container remove 8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:13:42 compute-0 systemd[1]: libpod-conmon-8d99fbc577eaa1154bc0b24b10203ff6fdb5d266ca9edb3a0cc1a85d3c81422e.scope: Deactivated successfully.
Oct 01 14:13:42 compute-0 sudo[309312]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:42 compute-0 podman[309452]: 2025-10-01 14:13:42.644529189 +0000 UTC m=+0.080272119 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:13:42 compute-0 podman[309456]: 2025-10-01 14:13:42.644529159 +0000 UTC m=+0.066001816 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:13:42 compute-0 podman[309462]: 2025-10-01 14:13:42.665317499 +0000 UTC m=+0.082706206 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 14:13:42 compute-0 podman[309454]: 2025-10-01 14:13:42.665521356 +0000 UTC m=+0.095772532 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:13:42 compute-0 sudo[309520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:42 compute-0 sudo[309520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:42 compute-0 sudo[309520]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:42 compute-0 sudo[309558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:13:42 compute-0 sudo[309558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:42 compute-0 sudo[309558]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:42 compute-0 sudo[309583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:42 compute-0 sudo[309583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:42 compute-0 sudo[309583]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:42 compute-0 sudo[309608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:13:42 compute-0 sudo[309608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.208851324 +0000 UTC m=+0.060316166 container create 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:13:43 compute-0 systemd[1]: Started libpod-conmon-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope.
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.182080024 +0000 UTC m=+0.033544926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.312616079 +0000 UTC m=+0.164080981 container init 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.31991222 +0000 UTC m=+0.171377062 container start 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.324043951 +0000 UTC m=+0.175508833 container attach 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 01 14:13:43 compute-0 naughty_bardeen[309690]: 167 167
Oct 01 14:13:43 compute-0 systemd[1]: libpod-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope: Deactivated successfully.
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.328275525 +0000 UTC m=+0.179740347 container died 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fca902e5fb984f8d03fcefbbb705e7010e5dc90948dd7c9aa6697c41b5ea42c-merged.mount: Deactivated successfully.
Oct 01 14:13:43 compute-0 podman[309673]: 2025-10-01 14:13:43.374795633 +0000 UTC m=+0.226260445 container remove 43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bardeen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:13:43 compute-0 systemd[1]: libpod-conmon-43cf6031a261c067d61b21660f8bd463455c01ff1c1fd9a78d05ecc10cb1ac59.scope: Deactivated successfully.
Oct 01 14:13:43 compute-0 podman[309714]: 2025-10-01 14:13:43.551221154 +0000 UTC m=+0.053811150 container create 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 01 14:13:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:43 compute-0 systemd[1]: Started libpod-conmon-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope.
Oct 01 14:13:43 compute-0 podman[309714]: 2025-10-01 14:13:43.52340017 +0000 UTC m=+0.025990226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:13:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:13:43 compute-0 podman[309714]: 2025-10-01 14:13:43.65601367 +0000 UTC m=+0.158603716 container init 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:13:43 compute-0 podman[309714]: 2025-10-01 14:13:43.662522197 +0000 UTC m=+0.165112153 container start 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:13:43 compute-0 podman[309714]: 2025-10-01 14:13:43.665953376 +0000 UTC m=+0.168543372 container attach 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:13:43 compute-0 ceph-mon[74802]: pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]: {
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_id": 0,
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "type": "bluestore"
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     },
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_id": 2,
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "type": "bluestore"
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     },
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_id": 1,
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:         "type": "bluestore"
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]:     }
Oct 01 14:13:44 compute-0 hungry_dewdney[309731]: }
Oct 01 14:13:44 compute-0 systemd[1]: libpod-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Deactivated successfully.
Oct 01 14:13:44 compute-0 systemd[1]: libpod-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Consumed 1.076s CPU time.
Oct 01 14:13:44 compute-0 podman[309714]: 2025-10-01 14:13:44.727812025 +0000 UTC m=+1.230402081 container died 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:13:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a053d621c435c9a9ec923e2c642fdfaa95fcaee5cf452fc5a74db567d6fd301b-merged.mount: Deactivated successfully.
Oct 01 14:13:44 compute-0 podman[309714]: 2025-10-01 14:13:44.797707574 +0000 UTC m=+1.300297570 container remove 6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct 01 14:13:44 compute-0 systemd[1]: libpod-conmon-6ad9b4815926c55770966cb2841bf22d60686a861253d4315300c1e3254f5132.scope: Deactivated successfully.
Oct 01 14:13:44 compute-0 sudo[309608]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:13:44 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:44 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:13:44 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:44 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 889363bc-5063-438f-bd19-3c8aa4857f15 does not exist
Oct 01 14:13:44 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev b0e0612a-7def-494a-80b7-e0942d2e4bb4 does not exist
Oct 01 14:13:44 compute-0 sudo[309776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:13:44 compute-0 sudo[309776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:44 compute-0 sudo[309776]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:45 compute-0 sudo[309801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:13:45 compute-0 sudo[309801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:13:45 compute-0 sudo[309801]: pam_unix(sudo:session): session closed for user root
Oct 01 14:13:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:45 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:45 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:13:45 compute-0 ceph-mon[74802]: pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:47 compute-0 ceph-mon[74802]: pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:13:47
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'images', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Oct 01 14:13:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:13:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:13:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:13:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:49 compute-0 ceph-mon[74802]: pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:51 compute-0 ceph-mon[74802]: pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:53 compute-0 ceph-mon[74802]: pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:13:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:13:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:13:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:13:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:13:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3477478826' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:13:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:56 compute-0 ceph-mon[74802]: pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:57 compute-0 ceph-mon[74802]: pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:13:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:13:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:13:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:13:59 compute-0 ceph-mon[74802]: pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:01 compute-0 ceph-mon[74802]: pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:03 compute-0 ceph-mon[74802]: pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:05 compute-0 ceph-mon[74802]: pgmap v2146: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:07 compute-0 ceph-mon[74802]: pgmap v2147: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:09 compute-0 ceph-mon[74802]: pgmap v2148: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:11 compute-0 ceph-mon[74802]: pgmap v2149: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.339 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:14:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:14:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:14:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:14:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:13 compute-0 podman[309829]: 2025-10-01 14:14:13.512463459 +0000 UTC m=+0.059512570 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:14:13 compute-0 podman[309827]: 2025-10-01 14:14:13.530445189 +0000 UTC m=+0.081453507 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:14:13 compute-0 podman[309828]: 2025-10-01 14:14:13.530547633 +0000 UTC m=+0.077539823 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923)
Oct 01 14:14:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:13 compute-0 podman[309826]: 2025-10-01 14:14:13.632094067 +0000 UTC m=+0.177977842 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 14:14:13 compute-0 ceph-mon[74802]: pgmap v2150: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:15 compute-0 ceph-mon[74802]: pgmap v2151: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:17 compute-0 ceph-mon[74802]: pgmap v2152: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:19 compute-0 ceph-mon[74802]: pgmap v2153: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:21 compute-0 ceph-mon[74802]: pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:22 compute-0 nova_compute[260022]: 2025-10-01 14:14:22.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:23 compute-0 ceph-mon[74802]: pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:25 compute-0 ceph-mon[74802]: pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:27 compute-0 ceph-mon[74802]: pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:29 compute-0 ceph-mon[74802]: pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.381 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.382 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:14:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:31 compute-0 ceph-mon[74802]: pgmap v2159: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:14:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812019893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:14:31 compute-0 nova_compute[260022]: 2025-10-01 14:14:31.902 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.096 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.098 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5028MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.098 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.099 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.187 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.202 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.203 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.203 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.260 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:14:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:14:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2851974264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.673 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.680 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:14:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2812019893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:14:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2851974264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.695 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.696 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:14:32 compute-0 nova_compute[260022]: 2025-10-01 14:14:32.696 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:14:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:33 compute-0 ceph-mon[74802]: pgmap v2160: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:34 compute-0 nova_compute[260022]: 2025-10-01 14:14:34.698 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:34 compute-0 nova_compute[260022]: 2025-10-01 14:14:34.699 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:34 compute-0 nova_compute[260022]: 2025-10-01 14:14:34.699 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:14:35 compute-0 nova_compute[260022]: 2025-10-01 14:14:35.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:35 compute-0 ceph-mon[74802]: pgmap v2161: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:36 compute-0 nova_compute[260022]: 2025-10-01 14:14:36.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:37 compute-0 ceph-mon[74802]: pgmap v2162: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:39 compute-0 ceph-mon[74802]: pgmap v2163: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:40 compute-0 nova_compute[260022]: 2025-10-01 14:14:40.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:40 compute-0 nova_compute[260022]: 2025-10-01 14:14:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:14:40 compute-0 nova_compute[260022]: 2025-10-01 14:14:40.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:14:40 compute-0 nova_compute[260022]: 2025-10-01 14:14:40.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:14:40 compute-0 nova_compute[260022]: 2025-10-01 14:14:40.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:41 compute-0 ceph-mon[74802]: pgmap v2164: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:42 compute-0 nova_compute[260022]: 2025-10-01 14:14:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:14:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:43 compute-0 ceph-mon[74802]: pgmap v2165: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:44 compute-0 podman[309951]: 2025-10-01 14:14:44.544257321 +0000 UTC m=+0.098626702 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 01 14:14:44 compute-0 podman[309952]: 2025-10-01 14:14:44.550774418 +0000 UTC m=+0.089788591 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:14:44 compute-0 podman[309957]: 2025-10-01 14:14:44.550741507 +0000 UTC m=+0.079666730 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:14:44 compute-0 podman[309959]: 2025-10-01 14:14:44.581596507 +0000 UTC m=+0.104473858 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 01 14:14:45 compute-0 sudo[310034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:45 compute-0 sudo[310034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:45 compute-0 sudo[310034]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:45 compute-0 sudo[310059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:14:45 compute-0 sudo[310059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:45 compute-0 sudo[310059]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:45 compute-0 sudo[310084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:45 compute-0 sudo[310084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:45 compute-0 sudo[310084]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:45 compute-0 sudo[310109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 01 14:14:45 compute-0 sudo[310109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:45 compute-0 ceph-mon[74802]: pgmap v2166: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:45 compute-0 podman[310207]: 2025-10-01 14:14:45.977682597 +0000 UTC m=+0.060110259 container exec dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:14:46 compute-0 podman[310207]: 2025-10-01 14:14:46.089553718 +0000 UTC m=+0.171981390 container exec_died dfadbb96d7d51fa7c2ed721145616ced603621ae12bae5125feaf16532ef7320 (image=quay.io/ceph/ceph:v18, name=ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:14:46 compute-0 sudo[310109]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:14:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:14:46 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:46 compute-0 sudo[310367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:46 compute-0 sudo[310367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:46 compute-0 sudo[310367]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:47 compute-0 sudo[310392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:14:47 compute-0 sudo[310392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:47 compute-0 sudo[310392]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:47 compute-0 sudo[310417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:47 compute-0 sudo[310417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:47 compute-0 sudo[310417]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:47 compute-0 sudo[310442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:14:47 compute-0 sudo[310442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:47 compute-0 sudo[310442]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 52c76f9a-3871-40d9-a7f8-1bdd974c7932 does not exist
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 0fe235ba-dd94-4041-872f-97c1d70d21e2 does not exist
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 71e949d7-ecf6-452d-8e46-e069eb593ad2 does not exist
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:14:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:47 compute-0 ceph-mon[74802]: pgmap v2167: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:14:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:14:47 compute-0 sudo[310497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:47 compute-0 sudo[310497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:47 compute-0 sudo[310497]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:14:47
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Oct 01 14:14:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:14:47 compute-0 sudo[310522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:14:47 compute-0 sudo[310522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:47 compute-0 sudo[310522]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.021356) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088021398, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 256, "total_data_size": 2688392, "memory_usage": 2735312, "flush_reason": "Manual Compaction"}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct 01 14:14:48 compute-0 sudo[310547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:48 compute-0 sudo[310547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088039421, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2629505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42059, "largest_seqno": 43738, "table_properties": {"data_size": 2621673, "index_size": 4711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15871, "raw_average_key_size": 19, "raw_value_size": 2605966, "raw_average_value_size": 3257, "num_data_blocks": 210, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759327917, "oldest_key_time": 1759327917, "file_creation_time": 1759328088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 18132 microseconds, and 7125 cpu microseconds.
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:14:48 compute-0 sudo[310547]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.039485) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2629505 bytes OK
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.039509) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041270) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041289) EVENT_LOG_v1 {"time_micros": 1759328088041282, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.041310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2681128, prev total WAL file size 2681128, number of live WAL files 2.
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.042665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353131' seq:72057594037927935, type:22 .. '6C6F676D0031373632' seq:0, type:0; will stop at (end)
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2567KB)], [98(8180KB)]
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088042768, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11006215, "oldest_snapshot_seqno": -1}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6154 keys, 10902608 bytes, temperature: kUnknown
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088096917, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10902608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10858792, "index_size": 27322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 158838, "raw_average_key_size": 25, "raw_value_size": 10744680, "raw_average_value_size": 1745, "num_data_blocks": 1097, "num_entries": 6154, "num_filter_entries": 6154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.097183) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10902608 bytes
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.098467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 201.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 6682, records dropped: 528 output_compression: NoCompression
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.098486) EVENT_LOG_v1 {"time_micros": 1759328088098477, "job": 58, "event": "compaction_finished", "compaction_time_micros": 54247, "compaction_time_cpu_micros": 28357, "output_level": 6, "num_output_files": 1, "total_output_size": 10902608, "num_input_records": 6682, "num_output_records": 6154, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088099284, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328088101037, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.042476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:14:48.101123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:14:48 compute-0 sudo[310572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:14:48 compute-0 sudo[310572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:14:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.495372943 +0000 UTC m=+0.058088816 container create 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:14:48 compute-0 systemd[1]: Started libpod-conmon-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope.
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.463555383 +0000 UTC m=+0.026271336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.603905898 +0000 UTC m=+0.166621811 container init 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.61307169 +0000 UTC m=+0.175787573 container start 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.616021053 +0000 UTC m=+0.178736936 container attach 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 14:14:48 compute-0 agitated_khayyam[310652]: 167 167
Oct 01 14:14:48 compute-0 systemd[1]: libpod-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope: Deactivated successfully.
Oct 01 14:14:48 compute-0 conmon[310652]: conmon 407e8ffc50cbb689e47a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope/container/memory.events
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.621383193 +0000 UTC m=+0.184099056 container died 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 14:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e3eaad66725632a2821b857d6f3b48177ccf703a1495feef549d171f27e68a-merged.mount: Deactivated successfully.
Oct 01 14:14:48 compute-0 podman[310636]: 2025-10-01 14:14:48.677191945 +0000 UTC m=+0.239907858 container remove 407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct 01 14:14:48 compute-0 systemd[1]: libpod-conmon-407e8ffc50cbb689e47a1800d2d1d62bb510b796bbd167a2ef193e78c36cac26.scope: Deactivated successfully.
Oct 01 14:14:48 compute-0 podman[310677]: 2025-10-01 14:14:48.852443248 +0000 UTC m=+0.046043872 container create b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 14:14:48 compute-0 systemd[1]: Started libpod-conmon-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope.
Oct 01 14:14:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:48 compute-0 podman[310677]: 2025-10-01 14:14:48.834084196 +0000 UTC m=+0.027684840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:48 compute-0 podman[310677]: 2025-10-01 14:14:48.940586377 +0000 UTC m=+0.134187061 container init b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 14:14:48 compute-0 podman[310677]: 2025-10-01 14:14:48.949943993 +0000 UTC m=+0.143544637 container start b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 01 14:14:48 compute-0 podman[310677]: 2025-10-01 14:14:48.953685732 +0000 UTC m=+0.147286386 container attach b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 14:14:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:49 compute-0 ceph-mon[74802]: pgmap v2168: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:50 compute-0 sad_faraday[310693]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:14:50 compute-0 sad_faraday[310693]: --> relative data size: 1.0
Oct 01 14:14:50 compute-0 sad_faraday[310693]: --> All data devices are unavailable
Oct 01 14:14:50 compute-0 systemd[1]: libpod-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Deactivated successfully.
Oct 01 14:14:50 compute-0 systemd[1]: libpod-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Consumed 1.190s CPU time.
Oct 01 14:14:50 compute-0 podman[310723]: 2025-10-01 14:14:50.24355378 +0000 UTC m=+0.037788930 container died b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0583e970e9fd6e310194834ea8048d626759ef9d42924acedfd3a3c5b8d2cebb-merged.mount: Deactivated successfully.
Oct 01 14:14:50 compute-0 podman[310723]: 2025-10-01 14:14:50.424304219 +0000 UTC m=+0.218539339 container remove b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:14:50 compute-0 systemd[1]: libpod-conmon-b2cb602cab9dd053cfe8a98adc70a5deada6011da336ad610775c891be6cce30.scope: Deactivated successfully.
Oct 01 14:14:50 compute-0 sudo[310572]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:50 compute-0 sudo[310738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:50 compute-0 sudo[310738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:50 compute-0 sudo[310738]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:50 compute-0 sudo[310763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:14:50 compute-0 sudo[310763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:50 compute-0 sudo[310763]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:50 compute-0 sudo[310788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:50 compute-0 sudo[310788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:50 compute-0 sudo[310788]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:50 compute-0 sudo[310813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:14:50 compute-0 sudo[310813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.159330992 +0000 UTC m=+0.069005531 container create 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:14:51 compute-0 systemd[1]: Started libpod-conmon-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope.
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.129120203 +0000 UTC m=+0.038794782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.262212218 +0000 UTC m=+0.171886737 container init 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.274906281 +0000 UTC m=+0.184580780 container start 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.278421913 +0000 UTC m=+0.188096462 container attach 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:14:51 compute-0 focused_sinoussi[310893]: 167 167
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.284064532 +0000 UTC m=+0.193739031 container died 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 01 14:14:51 compute-0 systemd[1]: libpod-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope: Deactivated successfully.
Oct 01 14:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e817bd3c98806629bb6db508293d6bef4e53b1858dbe11ed0866ebb1df3e8dcd-merged.mount: Deactivated successfully.
Oct 01 14:14:51 compute-0 podman[310877]: 2025-10-01 14:14:51.325663613 +0000 UTC m=+0.235338122 container remove 483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:14:51 compute-0 systemd[1]: libpod-conmon-483811e899b6bb6a712ec84ba198691572ab9d8360fe5710b500946916802086.scope: Deactivated successfully.
Oct 01 14:14:51 compute-0 podman[310916]: 2025-10-01 14:14:51.491232959 +0000 UTC m=+0.043933716 container create ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 14:14:51 compute-0 systemd[1]: Started libpod-conmon-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope.
Oct 01 14:14:51 compute-0 podman[310916]: 2025-10-01 14:14:51.470312665 +0000 UTC m=+0.023013412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:51 compute-0 podman[310916]: 2025-10-01 14:14:51.591352797 +0000 UTC m=+0.144053554 container init ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 14:14:51 compute-0 podman[310916]: 2025-10-01 14:14:51.598544135 +0000 UTC m=+0.151244862 container start ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 14:14:51 compute-0 podman[310916]: 2025-10-01 14:14:51.601433787 +0000 UTC m=+0.154134514 container attach ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:14:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:51 compute-0 ceph-mon[74802]: pgmap v2169: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:52 compute-0 competent_albattani[310933]: {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     "0": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "devices": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "/dev/loop3"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             ],
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_name": "ceph_lv0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_size": "21470642176",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "name": "ceph_lv0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "tags": {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_name": "ceph",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.crush_device_class": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.encrypted": "0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_id": "0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.vdo": "0"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             },
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "vg_name": "ceph_vg0"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         }
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     ],
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     "1": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "devices": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "/dev/loop4"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             ],
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_name": "ceph_lv1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_size": "21470642176",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "name": "ceph_lv1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "tags": {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_name": "ceph",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.crush_device_class": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.encrypted": "0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_id": "1",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.vdo": "0"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             },
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "vg_name": "ceph_vg1"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         }
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     ],
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     "2": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "devices": [
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "/dev/loop5"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             ],
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_name": "ceph_lv2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_size": "21470642176",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "name": "ceph_lv2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "tags": {
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.cluster_name": "ceph",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.crush_device_class": "",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.encrypted": "0",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osd_id": "2",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:                 "ceph.vdo": "0"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             },
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "type": "block",
Oct 01 14:14:52 compute-0 competent_albattani[310933]:             "vg_name": "ceph_vg2"
Oct 01 14:14:52 compute-0 competent_albattani[310933]:         }
Oct 01 14:14:52 compute-0 competent_albattani[310933]:     ]
Oct 01 14:14:52 compute-0 competent_albattani[310933]: }
Oct 01 14:14:52 compute-0 systemd[1]: libpod-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope: Deactivated successfully.
Oct 01 14:14:52 compute-0 podman[310916]: 2025-10-01 14:14:52.389058471 +0000 UTC m=+0.941759198 container died ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 14:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3f29ba02e30ce0deec981d07ca9fd0491b8fcd9fe1d38f284daf5b7b7fad853-merged.mount: Deactivated successfully.
Oct 01 14:14:52 compute-0 podman[310916]: 2025-10-01 14:14:52.451722301 +0000 UTC m=+1.004423028 container remove ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:14:52 compute-0 systemd[1]: libpod-conmon-ffd4395ac4c89355abb33b86d045848077bf554390fce8c51cce3c08ab425e49.scope: Deactivated successfully.
Oct 01 14:14:52 compute-0 sudo[310813]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:52 compute-0 sudo[310952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:52 compute-0 sudo[310952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:52 compute-0 sudo[310952]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:52 compute-0 sudo[310977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:14:52 compute-0 sudo[310977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:52 compute-0 sudo[310977]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:52 compute-0 sudo[311002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:52 compute-0 sudo[311002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:52 compute-0 sudo[311002]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:52 compute-0 sudo[311027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:14:52 compute-0 sudo[311027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.204003082 +0000 UTC m=+0.084085830 container create 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:14:53 compute-0 systemd[1]: Started libpod-conmon-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope.
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.161970188 +0000 UTC m=+0.042053006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.286517121 +0000 UTC m=+0.166599899 container init 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.296814759 +0000 UTC m=+0.176897487 container start 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:14:53 compute-0 youthful_tharp[311109]: 167 167
Oct 01 14:14:53 compute-0 systemd[1]: libpod-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope: Deactivated successfully.
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.302496529 +0000 UTC m=+0.182579307 container attach 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.303042406 +0000 UTC m=+0.183125144 container died 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-33459e658b7ccc075fe6a8ce562817f0fa40c63dc50e26e257d21ecf3153d52b-merged.mount: Deactivated successfully.
Oct 01 14:14:53 compute-0 podman[311092]: 2025-10-01 14:14:53.342584891 +0000 UTC m=+0.222667659 container remove 481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_tharp, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct 01 14:14:53 compute-0 systemd[1]: libpod-conmon-481e89bd64a200ef263b67bb867188b05436d2a252577fd991cfabed6bf3f987.scope: Deactivated successfully.
Oct 01 14:14:53 compute-0 podman[311133]: 2025-10-01 14:14:53.518851607 +0000 UTC m=+0.043538553 container create a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:14:53 compute-0 systemd[1]: Started libpod-conmon-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope.
Oct 01 14:14:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:53 compute-0 podman[311133]: 2025-10-01 14:14:53.49939906 +0000 UTC m=+0.024086046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:14:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:53 compute-0 podman[311133]: 2025-10-01 14:14:53.612534272 +0000 UTC m=+0.137221308 container init a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 14:14:53 compute-0 podman[311133]: 2025-10-01 14:14:53.626924658 +0000 UTC m=+0.151611644 container start a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:14:53 compute-0 podman[311133]: 2025-10-01 14:14:53.631202284 +0000 UTC m=+0.155889260 container attach a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:14:53 compute-0 ceph-mon[74802]: pgmap v2170: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:54 compute-0 quizzical_moser[311149]: {
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_id": 0,
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "type": "bluestore"
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     },
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_id": 2,
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "type": "bluestore"
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     },
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_id": 1,
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:         "type": "bluestore"
Oct 01 14:14:54 compute-0 quizzical_moser[311149]:     }
Oct 01 14:14:54 compute-0 quizzical_moser[311149]: }
Oct 01 14:14:54 compute-0 systemd[1]: libpod-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Deactivated successfully.
Oct 01 14:14:54 compute-0 podman[311133]: 2025-10-01 14:14:54.730932555 +0000 UTC m=+1.255619591 container died a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 14:14:54 compute-0 systemd[1]: libpod-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Consumed 1.110s CPU time.
Oct 01 14:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e655704f0b78cd98a99c2fd7c2e141ca64c9339edb8ab425269c59eb9aed54a-merged.mount: Deactivated successfully.
Oct 01 14:14:54 compute-0 podman[311133]: 2025-10-01 14:14:54.79689369 +0000 UTC m=+1.321580636 container remove a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:14:54 compute-0 systemd[1]: libpod-conmon-a6ec25fa3647e858e1f18a14f36b1aca65a151fadc84638fb0cc6300cd65e6a4.scope: Deactivated successfully.
Oct 01 14:14:54 compute-0 sudo[311027]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:14:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:14:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:54 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9782c2e8-8668-4629-bcf0-4b13370fea0f does not exist
Oct 01 14:14:54 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev eed95999-88df-4c66-a616-78fb5d839c26 does not exist
Oct 01 14:14:54 compute-0 sudo[311195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:14:54 compute-0 sudo[311195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:54 compute-0 sudo[311195]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:55 compute-0 sudo[311220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:14:55 compute-0 sudo[311220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:14:55 compute-0 sudo[311220]: pam_unix(sudo:session): session closed for user root
Oct 01 14:14:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:14:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:14:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:14:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:14:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:14:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:14:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1741173761' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:14:55 compute-0 ceph-mon[74802]: pgmap v2171: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:57 compute-0 ceph-mon[74802]: pgmap v2172: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:14:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:14:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:14:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:14:59 compute-0 ceph-mon[74802]: pgmap v2173: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:01 compute-0 ceph-mon[74802]: pgmap v2174: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:03 compute-0 ceph-mon[74802]: pgmap v2175: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:05 compute-0 nova_compute[260022]: 2025-10-01 14:15:05.342 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:05 compute-0 ceph-mon[74802]: pgmap v2176: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:07 compute-0 ceph-mon[74802]: pgmap v2177: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:09 compute-0 ceph-mon[74802]: pgmap v2178: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:11 compute-0 ceph-mon[74802]: pgmap v2179: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.340 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:15:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.341 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:15:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:12.341 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:15:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:13 compute-0 ceph-mon[74802]: pgmap v2180: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:15 compute-0 podman[311248]: 2025-10-01 14:15:15.554603991 +0000 UTC m=+0.075700245 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:15:15 compute-0 podman[311246]: 2025-10-01 14:15:15.55520014 +0000 UTC m=+0.087669075 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 01 14:15:15 compute-0 podman[311247]: 2025-10-01 14:15:15.555632694 +0000 UTC m=+0.090224046 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 14:15:15 compute-0 podman[311245]: 2025-10-01 14:15:15.589708696 +0000 UTC m=+0.123838943 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:15:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:15 compute-0 ceph-mon[74802]: pgmap v2181: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:17 compute-0 ceph-mon[74802]: pgmap v2182: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:19 compute-0 ceph-mon[74802]: pgmap v2183: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:21 compute-0 nova_compute[260022]: 2025-10-01 14:15:21.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:21 compute-0 ceph-mon[74802]: pgmap v2184: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:23 compute-0 ceph-mon[74802]: pgmap v2185: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:24 compute-0 nova_compute[260022]: 2025-10-01 14:15:24.374 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:24 compute-0 sshd-session[311327]: Accepted publickey for zuul from 192.168.122.30 port 57260 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 14:15:24 compute-0 systemd-logind[818]: New session 53 of user zuul.
Oct 01 14:15:24 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct 01 14:15:24 compute-0 sshd-session[311327]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 14:15:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:25 compute-0 ceph-mon[74802]: pgmap v2186: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:26 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.608 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:15:26 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.611 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:15:26 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:15:26.615 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:15:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:27 compute-0 ceph-mon[74802]: pgmap v2187: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:29 compute-0 ceph-mon[74802]: pgmap v2188: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:30 compute-0 sshd-session[311330]: Connection closed by 192.168.122.30 port 57260
Oct 01 14:15:30 compute-0 sshd-session[311327]: pam_unix(sshd:session): session closed for user zuul
Oct 01 14:15:30 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct 01 14:15:30 compute-0 systemd-logind[818]: Session 53 logged out. Waiting for processes to exit.
Oct 01 14:15:30 compute-0 systemd-logind[818]: Removed session 53.
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.374 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.375 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.376 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:15:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:31 compute-0 ceph-mon[74802]: pgmap v2189: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:15:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713596258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:15:31 compute-0 nova_compute[260022]: 2025-10-01 14:15:31.828 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.021 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.022 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5029MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.023 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.023 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.098 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.112 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.112 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.112 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.241 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:15:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:15:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306180590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:15:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2713596258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:15:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/306180590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.707 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.714 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.734 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.735 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:15:32 compute-0 nova_compute[260022]: 2025-10-01 14:15:32.735 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:15:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:33 compute-0 ceph-mon[74802]: pgmap v2190: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:35 compute-0 ceph-mon[74802]: pgmap v2191: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:36 compute-0 nova_compute[260022]: 2025-10-01 14:15:36.731 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:36 compute-0 nova_compute[260022]: 2025-10-01 14:15:36.731 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:36 compute-0 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:36 compute-0 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:36 compute-0 nova_compute[260022]: 2025-10-01 14:15:36.732 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:15:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:37 compute-0 ceph-mon[74802]: pgmap v2192: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.700766) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139700805, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 639, "num_deletes": 251, "total_data_size": 783368, "memory_usage": 796072, "flush_reason": "Manual Compaction"}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139708418, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 776533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43739, "largest_seqno": 44377, "table_properties": {"data_size": 773067, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7701, "raw_average_key_size": 19, "raw_value_size": 766242, "raw_average_value_size": 1910, "num_data_blocks": 61, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328089, "oldest_key_time": 1759328089, "file_creation_time": 1759328139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 7698 microseconds, and 4452 cpu microseconds.
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.708464) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 776533 bytes OK
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.708485) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712856) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712875) EVENT_LOG_v1 {"time_micros": 1759328139712869, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.712893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 779964, prev total WAL file size 781121, number of live WAL files 2.
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.713527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(758KB)], [101(10MB)]
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139713579, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11679141, "oldest_snapshot_seqno": -1}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: pgmap v2193: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6043 keys, 9920838 bytes, temperature: kUnknown
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139788318, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9920838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9878681, "index_size": 25919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 157187, "raw_average_key_size": 26, "raw_value_size": 9767411, "raw_average_value_size": 1616, "num_data_blocks": 1031, "num_entries": 6043, "num_filter_entries": 6043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328139, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.788837) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9920838 bytes
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.790766) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.0 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(27.8) write-amplify(12.8) OK, records in: 6555, records dropped: 512 output_compression: NoCompression
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.790814) EVENT_LOG_v1 {"time_micros": 1759328139790794, "job": 60, "event": "compaction_finished", "compaction_time_micros": 74882, "compaction_time_cpu_micros": 40617, "output_level": 6, "num_output_files": 1, "total_output_size": 9920838, "num_input_records": 6555, "num_output_records": 6043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139791360, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328139795201, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.713474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:39 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:15:39.795339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:15:41 compute-0 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:41 compute-0 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:15:41 compute-0 nova_compute[260022]: 2025-10-01 14:15:41.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:15:41 compute-0 nova_compute[260022]: 2025-10-01 14:15:41.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:15:41 compute-0 nova_compute[260022]: 2025-10-01 14:15:41.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:41 compute-0 ceph-mon[74802]: pgmap v2194: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:42 compute-0 nova_compute[260022]: 2025-10-01 14:15:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:43 compute-0 ceph-mon[74802]: pgmap v2195: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:45 compute-0 ceph-mon[74802]: pgmap v2196: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:46 compute-0 podman[311631]: 2025-10-01 14:15:46.55140749 +0000 UTC m=+0.085622258 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 01 14:15:46 compute-0 podman[311629]: 2025-10-01 14:15:46.551590856 +0000 UTC m=+0.096021719 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:15:46 compute-0 podman[311630]: 2025-10-01 14:15:46.567509252 +0000 UTC m=+0.104199459 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 01 14:15:46 compute-0 podman[311628]: 2025-10-01 14:15:46.603565647 +0000 UTC m=+0.150622173 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:47 compute-0 ceph-mon[74802]: pgmap v2197: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:15:47
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'default.rgw.log']
Oct 01 14:15:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:15:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:15:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:15:48 compute-0 nova_compute[260022]: 2025-10-01 14:15:48.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:48 compute-0 nova_compute[260022]: 2025-10-01 14:15:48.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 14:15:48 compute-0 nova_compute[260022]: 2025-10-01 14:15:48.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 14:15:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:49 compute-0 ceph-mon[74802]: pgmap v2198: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:51 compute-0 ceph-mon[74802]: pgmap v2199: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:53 compute-0 ceph-mon[74802]: pgmap v2200: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:55 compute-0 sudo[311708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:55 compute-0 sudo[311708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:55 compute-0 sudo[311708]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:55 compute-0 sudo[311733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:15:55 compute-0 sudo[311733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:55 compute-0 sudo[311733]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:15:55 compute-0 sudo[311758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:55 compute-0 sudo[311758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:55 compute-0 sudo[311758]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2065959952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:15:55 compute-0 nova_compute[260022]: 2025-10-01 14:15:55.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:15:55 compute-0 nova_compute[260022]: 2025-10-01 14:15:55.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 14:15:55 compute-0 sudo[311783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:15:55 compute-0 sudo[311783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:55 compute-0 sudo[311783]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:15:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 84d9faf8-d48e-4921-8fbc-3849b055031a does not exist
Oct 01 14:15:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c6d174fc-324b-425d-bb4e-eef9d32ae2a1 does not exist
Oct 01 14:15:55 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 29759e6d-c2d1-4b42-916a-3edea1626f73 does not exist
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:15:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:15:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:15:56 compute-0 sudo[311838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:56 compute-0 sudo[311838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:56 compute-0 sudo[311838]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:56 compute-0 sudo[311863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:15:56 compute-0 sudo[311863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:56 compute-0 sudo[311863]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:56 compute-0 sudo[311888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:56 compute-0 sudo[311888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:56 compute-0 sudo[311888]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:56 compute-0 sudo[311913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:15:56 compute-0 sudo[311913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:56 compute-0 ceph-mon[74802]: pgmap v2201: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:15:56 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.735529766 +0000 UTC m=+0.065302724 container create 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 14:15:56 compute-0 systemd[1]: Started libpod-conmon-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope.
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.700439032 +0000 UTC m=+0.030212080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:15:56 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.83897041 +0000 UTC m=+0.168743448 container init 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.850540566 +0000 UTC m=+0.180313535 container start 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.854504193 +0000 UTC m=+0.184277161 container attach 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:15:56 compute-0 elated_fermat[311994]: 167 167
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.856592899 +0000 UTC m=+0.186365867 container died 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 14:15:56 compute-0 systemd[1]: libpod-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope: Deactivated successfully.
Oct 01 14:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ea44bd2075d4453b53ec5efce2b92b09a22b60156ad3704a2e2443ee16cae42-merged.mount: Deactivated successfully.
Oct 01 14:15:56 compute-0 podman[311978]: 2025-10-01 14:15:56.922021766 +0000 UTC m=+0.251794734 container remove 445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:15:56 compute-0 systemd[1]: libpod-conmon-445b4619d58366457450d6f4308909239c68d02bc695c9f481f9c9a3b46e02c7.scope: Deactivated successfully.
Oct 01 14:15:57 compute-0 podman[312017]: 2025-10-01 14:15:57.135430441 +0000 UTC m=+0.072716039 container create cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 14:15:57 compute-0 systemd[1]: Started libpod-conmon-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope.
Oct 01 14:15:57 compute-0 podman[312017]: 2025-10-01 14:15:57.10481501 +0000 UTC m=+0.042100648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:15:57 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:57 compute-0 podman[312017]: 2025-10-01 14:15:57.225799039 +0000 UTC m=+0.163084627 container init cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:15:57 compute-0 podman[312017]: 2025-10-01 14:15:57.240101564 +0000 UTC m=+0.177387122 container start cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 14:15:57 compute-0 podman[312017]: 2025-10-01 14:15:57.244633008 +0000 UTC m=+0.181918566 container attach cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:57 compute-0 ceph-mon[74802]: pgmap v2202: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:15:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:15:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:15:58 compute-0 unruffled_lehmann[312034]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:15:58 compute-0 unruffled_lehmann[312034]: --> relative data size: 1.0
Oct 01 14:15:58 compute-0 unruffled_lehmann[312034]: --> All data devices are unavailable
Oct 01 14:15:58 compute-0 systemd[1]: libpod-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Deactivated successfully.
Oct 01 14:15:58 compute-0 podman[312017]: 2025-10-01 14:15:58.432064804 +0000 UTC m=+1.369350362 container died cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:15:58 compute-0 systemd[1]: libpod-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Consumed 1.132s CPU time.
Oct 01 14:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7024d2f53ab3af25f9ab0a0f820176d5b9669efe4164915f783803658c084f-merged.mount: Deactivated successfully.
Oct 01 14:15:58 compute-0 podman[312017]: 2025-10-01 14:15:58.492757691 +0000 UTC m=+1.430043269 container remove cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:15:58 compute-0 systemd[1]: libpod-conmon-cc191af1bd921fcd10a802be50d42e5c50ada8f4eed649078a03f0383e2a32a5.scope: Deactivated successfully.
Oct 01 14:15:58 compute-0 sudo[311913]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:58 compute-0 sudo[312078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:58 compute-0 sudo[312078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:58 compute-0 sudo[312078]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:58 compute-0 sudo[312103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:15:58 compute-0 sudo[312103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:58 compute-0 sudo[312103]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:58 compute-0 sudo[312128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:15:58 compute-0 sudo[312128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:58 compute-0 sudo[312128]: pam_unix(sudo:session): session closed for user root
Oct 01 14:15:58 compute-0 sudo[312153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:15:58 compute-0 sudo[312153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.215543286 +0000 UTC m=+0.051888098 container create 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:15:59 compute-0 systemd[1]: Started libpod-conmon-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope.
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.192759383 +0000 UTC m=+0.029104355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:15:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.315132558 +0000 UTC m=+0.151477410 container init 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.327317005 +0000 UTC m=+0.163661827 container start 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.331223568 +0000 UTC m=+0.167568410 container attach 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:15:59 compute-0 jovial_wescoff[312236]: 167 167
Oct 01 14:15:59 compute-0 systemd[1]: libpod-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope: Deactivated successfully.
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.336707852 +0000 UTC m=+0.173052704 container died 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e0cd50a5fe8ec3f1cd2074656bc5fa63b2a439b3bd527f62654780dfd5cbb21-merged.mount: Deactivated successfully.
Oct 01 14:15:59 compute-0 podman[312220]: 2025-10-01 14:15:59.389633712 +0000 UTC m=+0.225978524 container remove 4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wescoff, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:15:59 compute-0 systemd[1]: libpod-conmon-4a11572eff6b04951ffa2c89699a7da67eed7e23b557e64645a18c031533f646.scope: Deactivated successfully.
Oct 01 14:15:59 compute-0 podman[312261]: 2025-10-01 14:15:59.564907627 +0000 UTC m=+0.047470188 container create 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:15:59 compute-0 systemd[1]: Started libpod-conmon-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope.
Oct 01 14:15:59 compute-0 podman[312261]: 2025-10-01 14:15:59.540667078 +0000 UTC m=+0.023229619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:15:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:15:59 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:15:59 compute-0 podman[312261]: 2025-10-01 14:15:59.674059302 +0000 UTC m=+0.156621923 container init 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:15:59 compute-0 podman[312261]: 2025-10-01 14:15:59.688288644 +0000 UTC m=+0.170851195 container start 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:15:59 compute-0 podman[312261]: 2025-10-01 14:15:59.692932501 +0000 UTC m=+0.175495052 container attach 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:15:59 compute-0 ceph-mon[74802]: pgmap v2203: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]: {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     "0": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "devices": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "/dev/loop3"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             ],
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_name": "ceph_lv0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_size": "21470642176",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "name": "ceph_lv0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "tags": {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_name": "ceph",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.crush_device_class": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.encrypted": "0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_id": "0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.vdo": "0"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             },
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "vg_name": "ceph_vg0"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         }
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     ],
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     "1": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "devices": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "/dev/loop4"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             ],
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_name": "ceph_lv1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_size": "21470642176",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "name": "ceph_lv1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "tags": {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_name": "ceph",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.crush_device_class": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.encrypted": "0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_id": "1",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.vdo": "0"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             },
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "vg_name": "ceph_vg1"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         }
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     ],
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     "2": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "devices": [
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "/dev/loop5"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             ],
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_name": "ceph_lv2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_size": "21470642176",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "name": "ceph_lv2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "tags": {
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.cluster_name": "ceph",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.crush_device_class": "",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.encrypted": "0",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osd_id": "2",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:                 "ceph.vdo": "0"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             },
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "type": "block",
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:             "vg_name": "ceph_vg2"
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:         }
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]:     ]
Oct 01 14:16:00 compute-0 kind_mcnulty[312278]: }
Oct 01 14:16:00 compute-0 systemd[1]: libpod-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope: Deactivated successfully.
Oct 01 14:16:00 compute-0 podman[312261]: 2025-10-01 14:16:00.516233568 +0000 UTC m=+0.998796119 container died 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 14:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9e609b72c07dcfa1225775cfa163a99263de040a113fccb01a4ac33c8fc2b3c-merged.mount: Deactivated successfully.
Oct 01 14:16:00 compute-0 podman[312261]: 2025-10-01 14:16:00.584326059 +0000 UTC m=+1.066888590 container remove 0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:16:00 compute-0 systemd[1]: libpod-conmon-0f74e90389c5d4e50cca081dcd8021c0b0fe708f41f947a21f5c4a5011a98795.scope: Deactivated successfully.
Oct 01 14:16:00 compute-0 sudo[312153]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:00 compute-0 sudo[312298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:16:00 compute-0 sudo[312298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:00 compute-0 sudo[312298]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:00 compute-0 sudo[312323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:16:00 compute-0 sudo[312323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:00 compute-0 sudo[312323]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:00 compute-0 sudo[312348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:16:00 compute-0 sudo[312348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:00 compute-0 sudo[312348]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:00 compute-0 sudo[312373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:16:00 compute-0 sudo[312373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.367603315 +0000 UTC m=+0.055413770 container create 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 01 14:16:01 compute-0 systemd[1]: Started libpod-conmon-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope.
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.341677882 +0000 UTC m=+0.029488397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:16:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.470694038 +0000 UTC m=+0.158504563 container init 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.482705469 +0000 UTC m=+0.170515934 container start 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.486957474 +0000 UTC m=+0.174767949 container attach 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 01 14:16:01 compute-0 optimistic_bhabha[312455]: 167 167
Oct 01 14:16:01 compute-0 systemd[1]: libpod-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope: Deactivated successfully.
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.491816849 +0000 UTC m=+0.179627314 container died 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 14:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7f49d5144a7e355ff91a2210217a2a971986597c3f6f7381efd52e534e72d2f-merged.mount: Deactivated successfully.
Oct 01 14:16:01 compute-0 podman[312438]: 2025-10-01 14:16:01.548179567 +0000 UTC m=+0.235990012 container remove 91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bhabha, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 01 14:16:01 compute-0 systemd[1]: libpod-conmon-91335b325c78d3841064035b0f587f7c4d3604a377c56157ada05ce9ff519321.scope: Deactivated successfully.
Oct 01 14:16:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:01 compute-0 ceph-mon[74802]: pgmap v2204: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:01 compute-0 podman[312478]: 2025-10-01 14:16:01.758636318 +0000 UTC m=+0.060292974 container create 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:16:01 compute-0 systemd[1]: Started libpod-conmon-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope.
Oct 01 14:16:01 compute-0 podman[312478]: 2025-10-01 14:16:01.736581418 +0000 UTC m=+0.038238084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:16:01 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:16:01 compute-0 podman[312478]: 2025-10-01 14:16:01.870660045 +0000 UTC m=+0.172316741 container init 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:16:01 compute-0 podman[312478]: 2025-10-01 14:16:01.884969999 +0000 UTC m=+0.186626625 container start 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 14:16:01 compute-0 podman[312478]: 2025-10-01 14:16:01.888809031 +0000 UTC m=+0.190465747 container attach 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:16:02 compute-0 agitated_borg[312495]: {
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_id": 0,
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "type": "bluestore"
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     },
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_id": 2,
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "type": "bluestore"
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     },
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_id": 1,
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:16:02 compute-0 agitated_borg[312495]:         "type": "bluestore"
Oct 01 14:16:02 compute-0 agitated_borg[312495]:     }
Oct 01 14:16:02 compute-0 agitated_borg[312495]: }
Oct 01 14:16:02 compute-0 systemd[1]: libpod-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Deactivated successfully.
Oct 01 14:16:02 compute-0 systemd[1]: libpod-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Consumed 1.112s CPU time.
Oct 01 14:16:02 compute-0 podman[312478]: 2025-10-01 14:16:02.988013826 +0000 UTC m=+1.289670472 container died 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:16:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65a7e6abf94e1fbe06259831a50419c1839bef7b748e4c394148b79f0f61e1b-merged.mount: Deactivated successfully.
Oct 01 14:16:03 compute-0 podman[312478]: 2025-10-01 14:16:03.134012791 +0000 UTC m=+1.435669417 container remove 5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:16:03 compute-0 systemd[1]: libpod-conmon-5e784cd2679917a23c82bc1c4f62c15971369729af31d07ebecf3babf7843e65.scope: Deactivated successfully.
Oct 01 14:16:03 compute-0 sudo[312373]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:16:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:16:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:16:03 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:16:03 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev acb8d47b-2691-4538-8f51-6ffd7466b62e does not exist
Oct 01 14:16:03 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 679255d6-ca09-495c-8b41-598276f9624d does not exist
Oct 01 14:16:03 compute-0 sudo[312542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:16:03 compute-0 sudo[312542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:03 compute-0 sudo[312542]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:03 compute-0 sudo[312567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:16:03 compute-0 sudo[312567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:16:03 compute-0 sudo[312567]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:16:04 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:16:04 compute-0 ceph-mon[74802]: pgmap v2205: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:05 compute-0 ceph-mon[74802]: pgmap v2206: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:07 compute-0 ceph-mon[74802]: pgmap v2207: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:09 compute-0 ceph-mon[74802]: pgmap v2208: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:11 compute-0 ceph-mon[74802]: pgmap v2209: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.342 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:16:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:16:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:16:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.046284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173046376, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 525, "num_deletes": 250, "total_data_size": 517825, "memory_usage": 527072, "flush_reason": "Manual Compaction"}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173052157, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 381410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44378, "largest_seqno": 44902, "table_properties": {"data_size": 378716, "index_size": 730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7330, "raw_average_key_size": 20, "raw_value_size": 373106, "raw_average_value_size": 1051, "num_data_blocks": 32, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328139, "oldest_key_time": 1759328139, "file_creation_time": 1759328173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 5911 microseconds, and 2081 cpu microseconds.
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.052211) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 381410 bytes OK
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.052227) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057017) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057032) EVENT_LOG_v1 {"time_micros": 1759328173057027, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 514812, prev total WAL file size 514812, number of live WAL files 2.
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(372KB)], [104(9688KB)]
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173057693, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 10302248, "oldest_snapshot_seqno": -1}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5896 keys, 7130680 bytes, temperature: kUnknown
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173140482, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 7130680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7093993, "index_size": 20833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 154324, "raw_average_key_size": 26, "raw_value_size": 6989699, "raw_average_value_size": 1185, "num_data_blocks": 821, "num_entries": 5896, "num_filter_entries": 5896, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.140769) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 7130680 bytes
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.150827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.3 rd, 86.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(45.7) write-amplify(18.7) OK, records in: 6398, records dropped: 502 output_compression: NoCompression
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.150854) EVENT_LOG_v1 {"time_micros": 1759328173150842, "job": 62, "event": "compaction_finished", "compaction_time_micros": 82862, "compaction_time_cpu_micros": 18117, "output_level": 6, "num_output_files": 1, "total_output_size": 7130680, "num_input_records": 6398, "num_output_records": 5896, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173151116, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328173153795, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.057533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:16:13.153882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:16:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:14 compute-0 ceph-mon[74802]: pgmap v2210: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:15 compute-0 ceph-mon[74802]: pgmap v2211: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:17 compute-0 podman[312595]: 2025-10-01 14:16:17.538843957 +0000 UTC m=+0.083360818 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 01 14:16:17 compute-0 podman[312593]: 2025-10-01 14:16:17.551999694 +0000 UTC m=+0.094869993 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 01 14:16:17 compute-0 podman[312594]: 2025-10-01 14:16:17.562119625 +0000 UTC m=+0.105415667 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 01 14:16:17 compute-0 podman[312592]: 2025-10-01 14:16:17.594317818 +0000 UTC m=+0.137882659 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:17 compute-0 ceph-mon[74802]: pgmap v2212: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:19 compute-0 ceph-mon[74802]: pgmap v2213: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:22 compute-0 ceph-mon[74802]: pgmap v2214: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:24 compute-0 ceph-mon[74802]: pgmap v2215: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:25 compute-0 nova_compute[260022]: 2025-10-01 14:16:25.369 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:26 compute-0 ceph-mon[74802]: pgmap v2216: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:28 compute-0 ceph-mon[74802]: pgmap v2217: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:30 compute-0 ceph-mon[74802]: pgmap v2218: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.387 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.388 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.388 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.389 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.389 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:16:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:16:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902463455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:16:31 compute-0 nova_compute[260022]: 2025-10-01 14:16:31.854 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:16:31 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3902463455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.031 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.032 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5013MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.222 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.240 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.241 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.241 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.264 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.412 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.412 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.430 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.462 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.512 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:16:32 compute-0 ceph-mon[74802]: pgmap v2219: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:16:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322114946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.944 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:16:32 compute-0 nova_compute[260022]: 2025-10-01 14:16:32.952 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:16:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:33 compute-0 nova_compute[260022]: 2025-10-01 14:16:33.087 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:16:33 compute-0 nova_compute[260022]: 2025-10-01 14:16:33.090 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:16:33 compute-0 nova_compute[260022]: 2025-10-01 14:16:33.091 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:16:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:33 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2322114946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:16:34 compute-0 ceph-mon[74802]: pgmap v2220: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:36 compute-0 ceph-mon[74802]: pgmap v2221: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:37 compute-0 nova_compute[260022]: 2025-10-01 14:16:37.088 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:37 compute-0 nova_compute[260022]: 2025-10-01 14:16:37.088 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:37 compute-0 nova_compute[260022]: 2025-10-01 14:16:37.089 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:16:37 compute-0 nova_compute[260022]: 2025-10-01 14:16:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:37 compute-0 nova_compute[260022]: 2025-10-01 14:16:37.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:38 compute-0 ceph-mon[74802]: pgmap v2222: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:39 compute-0 sshd-session[312716]: Accepted publickey for zuul from 192.168.122.30 port 47408 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 14:16:39 compute-0 systemd-logind[818]: New session 54 of user zuul.
Oct 01 14:16:39 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct 01 14:16:39 compute-0 sshd-session[312716]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 14:16:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:39.809 161890 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'd2:60:33', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '86:05:a5:2a:6f:f1'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 01 14:16:39 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:39.812 161890 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 01 14:16:39 compute-0 sudo[312789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Oct 01 14:16:39 compute-0 sudo[312789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:39 compute-0 sudo[312789]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:39 compute-0 sudo[312815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Oct 01 14:16:39 compute-0 sudo[312815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 groupadd[312817]: group added to /etc/group: name=podman, GID=42479
Oct 01 14:16:40 compute-0 groupadd[312817]: group added to /etc/gshadow: name=podman
Oct 01 14:16:40 compute-0 groupadd[312817]: new group: name=podman, GID=42479
Oct 01 14:16:40 compute-0 sudo[312815]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Oct 01 14:16:40 compute-0 sudo[312823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 usermod[312825]: add 'zuul' to group 'podman'
Oct 01 14:16:40 compute-0 usermod[312825]: add 'zuul' to shadow group 'podman'
Oct 01 14:16:40 compute-0 sudo[312823]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Oct 01 14:16:40 compute-0 sudo[312832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312832]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Oct 01 14:16:40 compute-0 sudo[312835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312835]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Oct 01 14:16:40 compute-0 sudo[312838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312838]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Oct 01 14:16:40 compute-0 sudo[312841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312841]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Oct 01 14:16:40 compute-0 sudo[312844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312844]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312847]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Oct 01 14:16:40 compute-0 sudo[312847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 systemd[1]: Reloading.
Oct 01 14:16:40 compute-0 systemd-sysv-generator[312879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 14:16:40 compute-0 systemd-rc-local-generator[312876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 14:16:40 compute-0 sudo[312847]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Oct 01 14:16:40 compute-0 sudo[312884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 sudo[312884]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:40 compute-0 sudo[312887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Oct 01 14:16:40 compute-0 sudo[312887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:40 compute-0 systemd[1]: Reloading.
Oct 01 14:16:40 compute-0 ceph-mon[74802]: pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:41 compute-0 systemd-rc-local-generator[312913]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 01 14:16:41 compute-0 systemd-sysv-generator[312917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 01 14:16:41 compute-0 systemd[1]: Starting Podman API Socket...
Oct 01 14:16:41 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 01 14:16:41 compute-0 sudo[312887]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 nova_compute[260022]: 2025-10-01 14:16:41.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:41 compute-0 nova_compute[260022]: 2025-10-01 14:16:41.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:16:41 compute-0 nova_compute[260022]: 2025-10-01 14:16:41.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:16:41 compute-0 nova_compute[260022]: 2025-10-01 14:16:41.360 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:16:41 compute-0 sudo[312924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Oct 01 14:16:41 compute-0 sudo[312924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312924]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312927]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Oct 01 14:16:41 compute-0 sudo[312927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312927]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Oct 01 14:16:41 compute-0 sudo[312930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312930]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Oct 01 14:16:41 compute-0 sudo[312933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312933]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Oct 01 14:16:41 compute-0 sudo[312936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312936]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Oct 01 14:16:41 compute-0 dbus-broker-launch[786]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct 01 14:16:41 compute-0 sudo[312939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Oct 01 14:16:41 compute-0 systemd[1]: Closed Podman API Socket.
Oct 01 14:16:41 compute-0 systemd[1]: Stopping Podman API Socket...
Oct 01 14:16:41 compute-0 systemd[1]: Starting Podman API Socket...
Oct 01 14:16:41 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 01 14:16:41 compute-0 sudo[312939]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 sudo[312792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Oct 01 14:16:41 compute-0 sudo[312792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:16:41 compute-0 sudo[312792]: pam_unix(sudo:session): session closed for user root
Oct 01 14:16:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:41 compute-0 sshd-session[312946]: Accepted publickey for zuul from 192.168.122.30 port 40382 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 14:16:41 compute-0 systemd-logind[818]: New session 55 of user zuul.
Oct 01 14:16:41 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct 01 14:16:41 compute-0 sshd-session[312946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 14:16:41 compute-0 systemd[1]: Starting Podman API Service...
Oct 01 14:16:41 compute-0 systemd[1]: Started Podman API Service.
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Setting parallel job count to 25"
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Using sqlite as database backend"
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 01 14:16:41 compute-0 podman[312950]: time="2025-10-01T14:16:41Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 01 14:16:41 compute-0 podman[312950]: @ - - [01/Oct/2025:14:16:41 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 01 14:16:41 compute-0 podman[312950]: @ - - [01/Oct/2025:14:16:41 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27464 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct 01 14:16:42 compute-0 ceph-mon[74802]: pgmap v2224: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:43 compute-0 nova_compute[260022]: 2025-10-01 14:16:43.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:44 compute-0 nova_compute[260022]: 2025-10-01 14:16:44.347 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:16:44 compute-0 ceph-mon[74802]: pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:45 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:16:45.814 161890 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7280030e-2ba6-406c-9fae-f8284a927c47, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 01 14:16:46 compute-0 ceph-mon[74802]: pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:16:47
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'volumes']
Oct 01 14:16:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:16:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:16:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:16:48 compute-0 podman[312988]: 2025-10-01 14:16:48.516158254 +0000 UTC m=+0.068472306 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:16:48 compute-0 podman[312987]: 2025-10-01 14:16:48.517570098 +0000 UTC m=+0.073598528 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:16:48 compute-0 podman[312989]: 2025-10-01 14:16:48.51919261 +0000 UTC m=+0.070962165 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 14:16:48 compute-0 podman[312986]: 2025-10-01 14:16:48.564570461 +0000 UTC m=+0.117790312 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 01 14:16:49 compute-0 ceph-mon[74802]: pgmap v2227: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:51 compute-0 ceph-mon[74802]: pgmap v2228: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:53 compute-0 ceph-mon[74802]: pgmap v2229: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:55 compute-0 ceph-mon[74802]: pgmap v2230: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:16:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:16:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:16:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:16:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:16:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/616910395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:16:56 compute-0 podman[312950]: time="2025-10-01T14:16:56Z" level=info msg="Received shutdown.Stop(), terminating!" PID=312950
Oct 01 14:16:56 compute-0 systemd[1]: podman.service: Deactivated successfully.
Oct 01 14:16:57 compute-0 ceph-mon[74802]: pgmap v2231: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:16:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:16:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:16:59 compute-0 ceph-mon[74802]: pgmap v2232: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:16:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:01 compute-0 ceph-mon[74802]: pgmap v2233: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:03 compute-0 ceph-mon[74802]: pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:03 compute-0 sudo[313066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:03 compute-0 sudo[313066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:03 compute-0 sudo[313066]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:03 compute-0 sudo[313091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:17:03 compute-0 sudo[313091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:03 compute-0 sudo[313091]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:03 compute-0 sudo[313116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:03 compute-0 sudo[313116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:03 compute-0 sudo[313116]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:03 compute-0 sudo[313141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:17:03 compute-0 sudo[313141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:04 compute-0 sudo[313141]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 91b34b8b-3ba1-4f4c-96c3-aed2458a759e does not exist
Oct 01 14:17:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f73ee4b9-baf7-4989-a4ec-907a0d0b152c does not exist
Oct 01 14:17:04 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 4bd58f0a-af9a-44d8-b2c8-369f26c2545b does not exist
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:17:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:17:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:17:04 compute-0 sudo[313198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:04 compute-0 sudo[313198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:04 compute-0 sudo[313198]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:04 compute-0 sudo[313223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:17:04 compute-0 sudo[313223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:04 compute-0 sudo[313223]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:04 compute-0 sudo[313248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:04 compute-0 sudo[313248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:04 compute-0 sudo[313248]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:04 compute-0 sudo[313273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:17:04 compute-0 sudo[313273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:05 compute-0 sudo[313347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.083450362 +0000 UTC m=+0.066067950 container create 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:17:05 compute-0 sudo[313347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:17:05 compute-0 sudo[313347]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:05 compute-0 systemd[1]: Started libpod-conmon-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope.
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.054486512 +0000 UTC m=+0.037104140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:05 compute-0 ceph-mon[74802]: pgmap v2235: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:17:05 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:17:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.215565079 +0000 UTC m=+0.198182657 container init 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.227569019 +0000 UTC m=+0.210186597 container start 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.231578287 +0000 UTC m=+0.214195865 container attach 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:17:05 compute-0 gifted_driscoll[313381]: 167 167
Oct 01 14:17:05 compute-0 systemd[1]: libpod-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope: Deactivated successfully.
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.237554267 +0000 UTC m=+0.220171845 container died 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:17:05 compute-0 sudo[313384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Oct 01 14:17:05 compute-0 sudo[313384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:17:05 compute-0 sudo[313384]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-20750985d6fade45db7795d25305daf5f7173803a432d577d3cd183c28bd86a2-merged.mount: Deactivated successfully.
Oct 01 14:17:05 compute-0 podman[313340]: 2025-10-01 14:17:05.284643322 +0000 UTC m=+0.267260900 container remove 7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:17:05 compute-0 systemd[1]: libpod-conmon-7643401a5e848250645abb7a3d69644d9fb0480203be8e763409d1d138e6e0ab.scope: Deactivated successfully.
Oct 01 14:17:05 compute-0 podman[313432]: 2025-10-01 14:17:05.505882839 +0000 UTC m=+0.056180165 container create 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:17:05 compute-0 systemd[1]: Started libpod-conmon-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope.
Oct 01 14:17:05 compute-0 podman[313432]: 2025-10-01 14:17:05.484635385 +0000 UTC m=+0.034932711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:05 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:05 compute-0 podman[313432]: 2025-10-01 14:17:05.609972035 +0000 UTC m=+0.160269361 container init 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 14:17:05 compute-0 podman[313432]: 2025-10-01 14:17:05.623847506 +0000 UTC m=+0.174144802 container start 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:17:05 compute-0 podman[313432]: 2025-10-01 14:17:05.627277755 +0000 UTC m=+0.177575051 container attach 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:17:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:06 compute-0 inspiring_leakey[313448]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:17:06 compute-0 inspiring_leakey[313448]: --> relative data size: 1.0
Oct 01 14:17:06 compute-0 inspiring_leakey[313448]: --> All data devices are unavailable
Oct 01 14:17:06 compute-0 systemd[1]: libpod-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Deactivated successfully.
Oct 01 14:17:06 compute-0 systemd[1]: libpod-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Consumed 1.148s CPU time.
Oct 01 14:17:06 compute-0 podman[313477]: 2025-10-01 14:17:06.884195535 +0000 UTC m=+0.044825935 container died 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c496d642a57eccfd9d549ea62770683131ab93ba698ea8ffbfdb1e73a2e861d-merged.mount: Deactivated successfully.
Oct 01 14:17:06 compute-0 podman[313477]: 2025-10-01 14:17:06.949123557 +0000 UTC m=+0.109753957 container remove 43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct 01 14:17:06 compute-0 systemd[1]: libpod-conmon-43a3a3205ff753c84711f65629ebe604c2bbe7f04a44882f289be8564a4d244f.scope: Deactivated successfully.
Oct 01 14:17:06 compute-0 sudo[313273]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:07 compute-0 sudo[313493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:07 compute-0 sudo[313493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:07 compute-0 sudo[313493]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:07 compute-0 ceph-mon[74802]: pgmap v2236: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:07 compute-0 sudo[313518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:17:07 compute-0 sudo[313518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:07 compute-0 sudo[313518]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:07 compute-0 sudo[313543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:07 compute-0 sudo[313543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:07 compute-0 sudo[313543]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:07 compute-0 sudo[313568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:17:07 compute-0 sudo[313568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.843356588 +0000 UTC m=+0.071840922 container create 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:17:07 compute-0 systemd[1]: Started libpod-conmon-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope.
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.812976044 +0000 UTC m=+0.041460448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:07 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.944461049 +0000 UTC m=+0.172945433 container init 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.956657777 +0000 UTC m=+0.185142121 container start 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.960673054 +0000 UTC m=+0.189157438 container attach 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:17:07 compute-0 nervous_bose[313652]: 167 167
Oct 01 14:17:07 compute-0 systemd[1]: libpod-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope: Deactivated successfully.
Oct 01 14:17:07 compute-0 conmon[313652]: conmon 4aa24c51bf8558f471a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope/container/memory.events
Oct 01 14:17:07 compute-0 podman[313635]: 2025-10-01 14:17:07.967196791 +0000 UTC m=+0.195681125 container died 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cc6b2d8c4d0b0506536f12842e94e6625432de13fee071f4f31d7bc825800b5-merged.mount: Deactivated successfully.
Oct 01 14:17:08 compute-0 podman[313635]: 2025-10-01 14:17:08.013534073 +0000 UTC m=+0.242018377 container remove 4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:17:08 compute-0 systemd[1]: libpod-conmon-4aa24c51bf8558f471a3624efb1d1b07351b1d7ea05642d117b38466e52812db.scope: Deactivated successfully.
Oct 01 14:17:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:08 compute-0 podman[313676]: 2025-10-01 14:17:08.25555053 +0000 UTC m=+0.059414779 container create 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:17:08 compute-0 systemd[1]: Started libpod-conmon-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope.
Oct 01 14:17:08 compute-0 podman[313676]: 2025-10-01 14:17:08.227035944 +0000 UTC m=+0.030900293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:08 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:08 compute-0 podman[313676]: 2025-10-01 14:17:08.379959491 +0000 UTC m=+0.183823850 container init 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:17:08 compute-0 podman[313676]: 2025-10-01 14:17:08.399949796 +0000 UTC m=+0.203814055 container start 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 01 14:17:08 compute-0 podman[313676]: 2025-10-01 14:17:08.404661775 +0000 UTC m=+0.208526064 container attach 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:17:08 compute-0 sshd-session[312719]: Connection closed by 192.168.122.30 port 47408
Oct 01 14:17:08 compute-0 sshd-session[312716]: pam_unix(sshd:session): session closed for user zuul
Oct 01 14:17:08 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct 01 14:17:08 compute-0 systemd[1]: session-54.scope: Consumed 1.507s CPU time.
Oct 01 14:17:08 compute-0 systemd-logind[818]: Session 54 logged out. Waiting for processes to exit.
Oct 01 14:17:08 compute-0 systemd-logind[818]: Removed session 54.
Oct 01 14:17:09 compute-0 ceph-mon[74802]: pgmap v2237: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:09 compute-0 stoic_colden[313692]: {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     "0": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "devices": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "/dev/loop3"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             ],
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_name": "ceph_lv0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_size": "21470642176",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "name": "ceph_lv0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "tags": {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_name": "ceph",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.crush_device_class": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.encrypted": "0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_id": "0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.vdo": "0"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             },
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "vg_name": "ceph_vg0"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         }
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     ],
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     "1": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "devices": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "/dev/loop4"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             ],
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_name": "ceph_lv1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_size": "21470642176",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "name": "ceph_lv1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "tags": {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_name": "ceph",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.crush_device_class": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.encrypted": "0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_id": "1",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.vdo": "0"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             },
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "vg_name": "ceph_vg1"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         }
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     ],
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     "2": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "devices": [
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "/dev/loop5"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             ],
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_name": "ceph_lv2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_size": "21470642176",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "name": "ceph_lv2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "tags": {
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.cluster_name": "ceph",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.crush_device_class": "",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.encrypted": "0",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osd_id": "2",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:                 "ceph.vdo": "0"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             },
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "type": "block",
Oct 01 14:17:09 compute-0 stoic_colden[313692]:             "vg_name": "ceph_vg2"
Oct 01 14:17:09 compute-0 stoic_colden[313692]:         }
Oct 01 14:17:09 compute-0 stoic_colden[313692]:     ]
Oct 01 14:17:09 compute-0 stoic_colden[313692]: }
Oct 01 14:17:09 compute-0 systemd[1]: libpod-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope: Deactivated successfully.
Oct 01 14:17:09 compute-0 podman[313676]: 2025-10-01 14:17:09.261002304 +0000 UTC m=+1.064866593 container died 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf00eae6ca2f6b1eeddf5845b818056937c628aafbc46c727486a907c4ec0f63-merged.mount: Deactivated successfully.
Oct 01 14:17:09 compute-0 podman[313676]: 2025-10-01 14:17:09.332211195 +0000 UTC m=+1.136075444 container remove 6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:17:09 compute-0 systemd[1]: libpod-conmon-6a55facb277a400dace2f919c8643261c468ee5b3e2373299e2e00df83f554d1.scope: Deactivated successfully.
Oct 01 14:17:09 compute-0 nova_compute[260022]: 2025-10-01 14:17:09.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:09 compute-0 sudo[313568]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:09 compute-0 sudo[313714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:09 compute-0 sudo[313714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:09 compute-0 sudo[313714]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:09 compute-0 sudo[313739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:17:09 compute-0 sudo[313739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:09 compute-0 sudo[313739]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:09 compute-0 sudo[313764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:09 compute-0 sudo[313764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:09 compute-0 sudo[313764]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:09 compute-0 sudo[313789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:17:09 compute-0 sudo[313789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:10 compute-0 sshd-session[312949]: Connection closed by 192.168.122.30 port 40382
Oct 01 14:17:10 compute-0 sshd-session[312946]: pam_unix(sshd:session): session closed for user zuul
Oct 01 14:17:10 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct 01 14:17:10 compute-0 systemd-logind[818]: Session 55 logged out. Waiting for processes to exit.
Oct 01 14:17:10 compute-0 systemd-logind[818]: Removed session 55.
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.115462131 +0000 UTC m=+0.047301853 container create bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 14:17:10 compute-0 systemd[1]: Started libpod-conmon-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope.
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.095110225 +0000 UTC m=+0.026949937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.235528325 +0000 UTC m=+0.167368107 container init bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.24355403 +0000 UTC m=+0.175393732 container start bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.247306198 +0000 UTC m=+0.179145880 container attach bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 14:17:10 compute-0 nostalgic_merkle[313870]: 167 167
Oct 01 14:17:10 compute-0 systemd[1]: libpod-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope: Deactivated successfully.
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.249622212 +0000 UTC m=+0.181461944 container died bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cfa301def8cfc6c94f0df3878ffbefc1ccd0c63a5bf185a035249ff0a8d076d-merged.mount: Deactivated successfully.
Oct 01 14:17:10 compute-0 podman[313854]: 2025-10-01 14:17:10.299660622 +0000 UTC m=+0.231500344 container remove bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:17:10 compute-0 systemd[1]: libpod-conmon-bbcbc3cb8d67c851877a8012d5a10ab48855a0f8177a81e48ccc05094ad88904.scope: Deactivated successfully.
Oct 01 14:17:10 compute-0 podman[313894]: 2025-10-01 14:17:10.555240549 +0000 UTC m=+0.065368187 container create 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:17:10 compute-0 systemd[1]: Started libpod-conmon-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope.
Oct 01 14:17:10 compute-0 podman[313894]: 2025-10-01 14:17:10.526004441 +0000 UTC m=+0.036132149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:17:10 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:17:10 compute-0 podman[313894]: 2025-10-01 14:17:10.678485843 +0000 UTC m=+0.188613511 container init 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:17:10 compute-0 podman[313894]: 2025-10-01 14:17:10.69537809 +0000 UTC m=+0.205505728 container start 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 01 14:17:10 compute-0 podman[313894]: 2025-10-01 14:17:10.701000968 +0000 UTC m=+0.211128606 container attach 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:17:11 compute-0 ceph-mon[74802]: pgmap v2238: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]: {
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_id": 0,
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "type": "bluestore"
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     },
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_id": 2,
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "type": "bluestore"
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     },
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_id": 1,
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:         "type": "bluestore"
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]:     }
Oct 01 14:17:11 compute-0 wonderful_hypatia[313910]: }
Oct 01 14:17:11 compute-0 systemd[1]: libpod-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Deactivated successfully.
Oct 01 14:17:11 compute-0 podman[313894]: 2025-10-01 14:17:11.749415167 +0000 UTC m=+1.259542865 container died 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:17:11 compute-0 systemd[1]: libpod-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Consumed 1.062s CPU time.
Oct 01 14:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2168e197028a59f89ee4dfbc2d816b97a7fe0c08cafbe3cf99ccf3d706432a65-merged.mount: Deactivated successfully.
Oct 01 14:17:11 compute-0 podman[313894]: 2025-10-01 14:17:11.814320058 +0000 UTC m=+1.324447666 container remove 4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:17:11 compute-0 systemd[1]: libpod-conmon-4fb190c2428d01d44962ab4857cf2087384f6b9922e911e26a381f12f86e4d88.scope: Deactivated successfully.
Oct 01 14:17:11 compute-0 sudo[313789]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:17:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:11 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:17:11 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 81f334ab-3389-4db8-abf0-6ad5f5cdd1fd does not exist
Oct 01 14:17:11 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 135b6a4f-3d84-41d0-852c-dbc71493a5ca does not exist
Oct 01 14:17:11 compute-0 sudo[313955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:17:11 compute-0 sudo[313955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:11 compute-0 sudo[313955]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:12 compute-0 sudo[313980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:17:12 compute-0 sudo[313980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:17:12 compute-0 sudo[313980]: pam_unix(sudo:session): session closed for user root
Oct 01 14:17:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.343 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:17:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:17:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:17:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:17:12 compute-0 ceph-mon[74802]: pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:12 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:17:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:14 compute-0 ceph-mon[74802]: pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:16 compute-0 ceph-mon[74802]: pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:18 compute-0 ceph-mon[74802]: pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:19 compute-0 podman[314008]: 2025-10-01 14:17:19.544269966 +0000 UTC m=+0.084303708 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:17:19 compute-0 podman[314006]: 2025-10-01 14:17:19.551038281 +0000 UTC m=+0.098059546 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 01 14:17:19 compute-0 podman[314007]: 2025-10-01 14:17:19.551986531 +0000 UTC m=+0.093915414 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 01 14:17:19 compute-0 podman[314005]: 2025-10-01 14:17:19.600546023 +0000 UTC m=+0.148404694 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 14:17:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:20 compute-0 ceph-mon[74802]: pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:22 compute-0 ceph-mon[74802]: pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:24 compute-0 ceph-mon[74802]: pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:25 compute-0 nova_compute[260022]: 2025-10-01 14:17:25.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:26 compute-0 ceph-mon[74802]: pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:28 compute-0 ceph-mon[74802]: pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:31 compute-0 ceph-mon[74802]: pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.367 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.368 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.369 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:17:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:17:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282818037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:17:31 compute-0 nova_compute[260022]: 2025-10-01 14:17:31.845 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:17:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3282818037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.059 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.062 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.062 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.063 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.145 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.163 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.164 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.226 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:17:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:17:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997633879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.735 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.740 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.910 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.911 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:17:32 compute-0 nova_compute[260022]: 2025-10-01 14:17:32.911 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:17:33 compute-0 ceph-mon[74802]: pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:33 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/997633879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:17:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:35 compute-0 ceph-mon[74802]: pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:36 compute-0 nova_compute[260022]: 2025-10-01 14:17:36.907 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:36 compute-0 nova_compute[260022]: 2025-10-01 14:17:36.908 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:36 compute-0 nova_compute[260022]: 2025-10-01 14:17:36.909 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:17:37 compute-0 ceph-mon[74802]: pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:37 compute-0 nova_compute[260022]: 2025-10-01 14:17:37.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:38 compute-0 nova_compute[260022]: 2025-10-01 14:17:38.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:39 compute-0 ceph-mon[74802]: pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:41 compute-0 ceph-mon[74802]: pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:42 compute-0 nova_compute[260022]: 2025-10-01 14:17:42.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:42 compute-0 nova_compute[260022]: 2025-10-01 14:17:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:17:42 compute-0 nova_compute[260022]: 2025-10-01 14:17:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:17:42 compute-0 nova_compute[260022]: 2025-10-01 14:17:42.437 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:17:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:43 compute-0 ceph-mon[74802]: pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:44 compute-0 nova_compute[260022]: 2025-10-01 14:17:44.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:45 compute-0 nova_compute[260022]: 2025-10-01 14:17:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:17:45 compute-0 ceph-mon[74802]: pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:17:47 compute-0 ceph-mon[74802]: pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:17:47
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.control']
Oct 01 14:17:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:17:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:17:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:17:49 compute-0 ceph-mon[74802]: pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:50 compute-0 podman[314127]: 2025-10-01 14:17:50.535049229 +0000 UTC m=+0.074826209 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 01 14:17:50 compute-0 podman[314128]: 2025-10-01 14:17:50.561687684 +0000 UTC m=+0.089493203 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 01 14:17:50 compute-0 podman[314126]: 2025-10-01 14:17:50.565905969 +0000 UTC m=+0.111216584 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 01 14:17:50 compute-0 podman[314129]: 2025-10-01 14:17:50.570585627 +0000 UTC m=+0.093555883 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 01 14:17:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:51 compute-0 ceph-mon[74802]: pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:53 compute-0 ceph-mon[74802]: pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:54 compute-0 ceph-mon[74802]: pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:17:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:17:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:17:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:17:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:17:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/2478297574' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:17:56 compute-0 ceph-mon[74802]: pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:17:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:17:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:17:59 compute-0 ceph-mon[74802]: pgmap v2262: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:17:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:18:00 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1351 writes, 6361 keys, 1351 commit groups, 1.0 writes per commit group, ingest: 8.80 MB, 0.01 MB/s
                                           Interval WAL: 1351 writes, 1351 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.8      2.65              0.22        31    0.086       0      0       0.0       0.0
                                             L6      1/0    6.80 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     64.0     53.1      4.46              0.89        30    0.149    163K    16K       0.0       0.0
                                            Sum      1/0    6.80 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     40.1     41.1      7.11              1.10        61    0.117    163K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.8    141.2    137.3      0.41              0.19        12    0.034     38K   3090       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     64.0     53.1      4.46              0.89        30    0.149    163K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.9      2.64              0.22        30    0.088       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      4.6      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.054, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.28 GB read, 0.07 MB/s read, 7.1 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55daa55431f0#2 capacity: 304.00 MB usage: 32.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000291 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2106,31.56 MB,10.3819%) FilterBlock(62,460.05 KB,0.147784%) IndexBlock(62,792.88 KB,0.254701%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 01 14:18:01 compute-0 ceph-mon[74802]: pgmap v2263: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:03 compute-0 ceph-mon[74802]: pgmap v2264: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:05 compute-0 ceph-mon[74802]: pgmap v2265: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:07 compute-0 ceph-mon[74802]: pgmap v2266: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:09 compute-0 ceph-mon[74802]: pgmap v2267: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:11 compute-0 ceph-mon[74802]: pgmap v2268: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:12 compute-0 sudo[314206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:12 compute-0 sudo[314206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 sudo[314206]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:12 compute-0 sudo[314231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:18:12 compute-0 sudo[314231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 sudo[314231]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:12 compute-0 sudo[314256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:12 compute-0 sudo[314256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 sudo[314256]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:12 compute-0 sudo[314281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:18:12 compute-0 sudo[314281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.344 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:18:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:18:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:18:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:18:12 compute-0 sudo[314281]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:12 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev ab93582e-5adb-47dd-90a3-10b5a1a31189 does not exist
Oct 01 14:18:12 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 5dc8939c-f424-4d3a-8f88-9e4a41b940ff does not exist
Oct 01 14:18:12 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2c8bfb06-c873-465d-aff1-9fe8b4c6b88f does not exist
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:18:12 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:18:12 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:18:12 compute-0 sudo[314337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:12 compute-0 sudo[314337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 sudo[314337]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:12 compute-0 sudo[314362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:18:12 compute-0 sudo[314362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:12 compute-0 sudo[314362]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:13 compute-0 sudo[314387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:13 compute-0 sudo[314387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:13 compute-0 sudo[314387]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:13 compute-0 sudo[314412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:18:13 compute-0 sudo[314412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.394908272 +0000 UTC m=+0.036880932 container create 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:18:13 compute-0 systemd[1]: Started libpod-conmon-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope.
Oct 01 14:18:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.469332966 +0000 UTC m=+0.111305636 container init 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.379362299 +0000 UTC m=+0.021334989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.476945928 +0000 UTC m=+0.118918588 container start 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.480408168 +0000 UTC m=+0.122380838 container attach 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 14:18:13 compute-0 systemd[1]: libpod-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope: Deactivated successfully.
Oct 01 14:18:13 compute-0 quirky_mccarthy[314494]: 167 167
Oct 01 14:18:13 compute-0 conmon[314494]: conmon 7861ac2c3252e103c84d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope/container/memory.events
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.482822995 +0000 UTC m=+0.124795655 container died 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9625170523a709754b9e80c2bb3200bda5d2b2e6039069c784f47c9692f0344d-merged.mount: Deactivated successfully.
Oct 01 14:18:13 compute-0 ceph-mon[74802]: pgmap v2269: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:18:13 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:18:13 compute-0 podman[314477]: 2025-10-01 14:18:13.590414571 +0000 UTC m=+0.232387231 container remove 7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 14:18:13 compute-0 systemd[1]: libpod-conmon-7861ac2c3252e103c84d2a42b958c9a795552edeecf4956e6764380ee9c656fd.scope: Deactivated successfully.
Oct 01 14:18:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:13 compute-0 podman[314517]: 2025-10-01 14:18:13.767128104 +0000 UTC m=+0.039121224 container create 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 14:18:13 compute-0 systemd[1]: Started libpod-conmon-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope.
Oct 01 14:18:13 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:13 compute-0 podman[314517]: 2025-10-01 14:18:13.833141401 +0000 UTC m=+0.105134541 container init 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct 01 14:18:13 compute-0 podman[314517]: 2025-10-01 14:18:13.837981375 +0000 UTC m=+0.109974495 container start 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:18:13 compute-0 podman[314517]: 2025-10-01 14:18:13.841043012 +0000 UTC m=+0.113036132 container attach 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 01 14:18:13 compute-0 podman[314517]: 2025-10-01 14:18:13.752506589 +0000 UTC m=+0.024499729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:14 compute-0 friendly_easley[314534]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:18:14 compute-0 friendly_easley[314534]: --> relative data size: 1.0
Oct 01 14:18:14 compute-0 friendly_easley[314534]: --> All data devices are unavailable
Oct 01 14:18:14 compute-0 systemd[1]: libpod-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope: Deactivated successfully.
Oct 01 14:18:14 compute-0 podman[314517]: 2025-10-01 14:18:14.788612687 +0000 UTC m=+1.060605807 container died 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:18:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ee6c22a01b3c4ec6001e0ca3349d77366fe5d35debb91a8d4685e3b5425c6a-merged.mount: Deactivated successfully.
Oct 01 14:18:14 compute-0 podman[314517]: 2025-10-01 14:18:14.850368678 +0000 UTC m=+1.122361798 container remove 8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:18:14 compute-0 systemd[1]: libpod-conmon-8bf54b49c31edf7c5242427f3562dd2fa1d8dcae1f65713b5161f8a9735dccb5.scope: Deactivated successfully.
Oct 01 14:18:14 compute-0 sudo[314412]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:14 compute-0 sudo[314577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:14 compute-0 sudo[314577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:14 compute-0 sudo[314577]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:14 compute-0 sudo[314602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:18:14 compute-0 sudo[314602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:14 compute-0 sudo[314602]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:15 compute-0 sudo[314627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:15 compute-0 sudo[314627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:15 compute-0 sudo[314627]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:15 compute-0 sudo[314652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:18:15 compute-0 sudo[314652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.435264725 +0000 UTC m=+0.039892867 container create 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 14:18:15 compute-0 systemd[1]: Started libpod-conmon-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope.
Oct 01 14:18:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.507073326 +0000 UTC m=+0.111701508 container init 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.418626626 +0000 UTC m=+0.023254808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.514184372 +0000 UTC m=+0.118812524 container start 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct 01 14:18:15 compute-0 nifty_faraday[314733]: 167 167
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.517789846 +0000 UTC m=+0.122418048 container attach 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:18:15 compute-0 systemd[1]: libpod-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope: Deactivated successfully.
Oct 01 14:18:15 compute-0 conmon[314733]: conmon 82c5b3ec0d91c6079e8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope/container/memory.events
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.519755108 +0000 UTC m=+0.124383280 container died 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf49e8e56fff7b3f32fef2ab768580374544c5d57bb6dd157d5132d38d27144-merged.mount: Deactivated successfully.
Oct 01 14:18:15 compute-0 ceph-mon[74802]: pgmap v2270: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:15 compute-0 podman[314716]: 2025-10-01 14:18:15.573593859 +0000 UTC m=+0.178222001 container remove 82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:18:15 compute-0 systemd[1]: libpod-conmon-82c5b3ec0d91c6079e8c26ecfd9b7b9cf9309f2f385e8b2ea88a7fd206a1bfad.scope: Deactivated successfully.
Oct 01 14:18:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:15 compute-0 podman[314758]: 2025-10-01 14:18:15.721487386 +0000 UTC m=+0.039463175 container create 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:18:15 compute-0 systemd[1]: Started libpod-conmon-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope.
Oct 01 14:18:15 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:15 compute-0 podman[314758]: 2025-10-01 14:18:15.785499819 +0000 UTC m=+0.103475628 container init 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:18:15 compute-0 podman[314758]: 2025-10-01 14:18:15.795083593 +0000 UTC m=+0.113059382 container start 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 01 14:18:15 compute-0 podman[314758]: 2025-10-01 14:18:15.703162704 +0000 UTC m=+0.021138523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:15 compute-0 podman[314758]: 2025-10-01 14:18:15.799242585 +0000 UTC m=+0.117218384 container attach 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 14:18:16 compute-0 cool_pare[314774]: {
Oct 01 14:18:16 compute-0 cool_pare[314774]:     "0": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:         {
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "devices": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "/dev/loop3"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             ],
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_name": "ceph_lv0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_size": "21470642176",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "name": "ceph_lv0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "tags": {
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_name": "ceph",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.crush_device_class": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.encrypted": "0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_id": "0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.vdo": "0"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             },
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "vg_name": "ceph_vg0"
Oct 01 14:18:16 compute-0 cool_pare[314774]:         }
Oct 01 14:18:16 compute-0 cool_pare[314774]:     ],
Oct 01 14:18:16 compute-0 cool_pare[314774]:     "1": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:         {
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "devices": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "/dev/loop4"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             ],
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_name": "ceph_lv1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_size": "21470642176",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "name": "ceph_lv1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "tags": {
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_name": "ceph",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.crush_device_class": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.encrypted": "0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_id": "1",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.vdo": "0"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             },
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "vg_name": "ceph_vg1"
Oct 01 14:18:16 compute-0 cool_pare[314774]:         }
Oct 01 14:18:16 compute-0 cool_pare[314774]:     ],
Oct 01 14:18:16 compute-0 cool_pare[314774]:     "2": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:         {
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "devices": [
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "/dev/loop5"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             ],
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_name": "ceph_lv2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_size": "21470642176",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "name": "ceph_lv2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "tags": {
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.cluster_name": "ceph",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.crush_device_class": "",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.encrypted": "0",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osd_id": "2",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:                 "ceph.vdo": "0"
Oct 01 14:18:16 compute-0 cool_pare[314774]:             },
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "type": "block",
Oct 01 14:18:16 compute-0 cool_pare[314774]:             "vg_name": "ceph_vg2"
Oct 01 14:18:16 compute-0 cool_pare[314774]:         }
Oct 01 14:18:16 compute-0 cool_pare[314774]:     ]
Oct 01 14:18:16 compute-0 cool_pare[314774]: }
Oct 01 14:18:16 compute-0 systemd[1]: libpod-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope: Deactivated successfully.
Oct 01 14:18:16 compute-0 podman[314758]: 2025-10-01 14:18:16.516501696 +0000 UTC m=+0.834477495 container died 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 01 14:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-432b171598b99bb95d0e1d15a46c35cd728b983c19d343afb836b94359a6cbd6-merged.mount: Deactivated successfully.
Oct 01 14:18:16 compute-0 podman[314758]: 2025-10-01 14:18:16.58024676 +0000 UTC m=+0.898222569 container remove 70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 01 14:18:16 compute-0 systemd[1]: libpod-conmon-70c3bffb04d4710606934f078e9fda53b0ac462abe5c89c120ad3a015df01546.scope: Deactivated successfully.
Oct 01 14:18:16 compute-0 sudo[314652]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:16 compute-0 sudo[314796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:16 compute-0 sudo[314796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:16 compute-0 sudo[314796]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:16 compute-0 sudo[314821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:18:16 compute-0 sudo[314821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:16 compute-0 sudo[314821]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:16 compute-0 sudo[314846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:16 compute-0 sudo[314846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:16 compute-0 sudo[314846]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:16 compute-0 sudo[314871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:18:16 compute-0 sudo[314871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.118742253 +0000 UTC m=+0.039779174 container create 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:18:17 compute-0 systemd[1]: Started libpod-conmon-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope.
Oct 01 14:18:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.101403983 +0000 UTC m=+0.022440934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.199274271 +0000 UTC m=+0.120311222 container init 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.20554962 +0000 UTC m=+0.126586541 container start 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.209090693 +0000 UTC m=+0.130127614 container attach 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 01 14:18:17 compute-0 great_driscoll[314953]: 167 167
Oct 01 14:18:17 compute-0 systemd[1]: libpod-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope: Deactivated successfully.
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.212310966 +0000 UTC m=+0.133347917 container died 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 01 14:18:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb4f43684d44a298a0b6d86a0b7347a73e37d6ac3672e34bef05a2d893ca3178-merged.mount: Deactivated successfully.
Oct 01 14:18:17 compute-0 podman[314936]: 2025-10-01 14:18:17.272051282 +0000 UTC m=+0.193088233 container remove 444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:18:17 compute-0 systemd[1]: libpod-conmon-444fa15d0fea3f7d0318ea3ee5de74bf07d074a8d4043364beb7940ff9ed8c51.scope: Deactivated successfully.
Oct 01 14:18:17 compute-0 podman[314979]: 2025-10-01 14:18:17.46498018 +0000 UTC m=+0.039436684 container create 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 14:18:17 compute-0 systemd[1]: Started libpod-conmon-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope.
Oct 01 14:18:17 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:18:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:18:17 compute-0 podman[314979]: 2025-10-01 14:18:17.538679081 +0000 UTC m=+0.113135615 container init 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 01 14:18:17 compute-0 podman[314979]: 2025-10-01 14:18:17.448363842 +0000 UTC m=+0.022820366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:18:17 compute-0 podman[314979]: 2025-10-01 14:18:17.546074145 +0000 UTC m=+0.120530649 container start 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:18:17 compute-0 podman[314979]: 2025-10-01 14:18:17.550626321 +0000 UTC m=+0.125082905 container attach 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:18:17 compute-0 ceph-mon[74802]: pgmap v2271: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:18 compute-0 wizardly_carson[314996]: {
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_id": 0,
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "type": "bluestore"
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     },
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_id": 2,
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "type": "bluestore"
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     },
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_id": 1,
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:         "type": "bluestore"
Oct 01 14:18:18 compute-0 wizardly_carson[314996]:     }
Oct 01 14:18:18 compute-0 wizardly_carson[314996]: }
Oct 01 14:18:18 compute-0 systemd[1]: libpod-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Deactivated successfully.
Oct 01 14:18:18 compute-0 systemd[1]: libpod-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Consumed 1.034s CPU time.
Oct 01 14:18:18 compute-0 podman[314979]: 2025-10-01 14:18:18.572244878 +0000 UTC m=+1.146701382 container died 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct 01 14:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e410fd9c6da367ec95ecbe8b92d656ee3159ac534e65a22d7d7a42d0a02d11-merged.mount: Deactivated successfully.
Oct 01 14:18:18 compute-0 podman[314979]: 2025-10-01 14:18:18.633653738 +0000 UTC m=+1.208110242 container remove 75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:18:18 compute-0 systemd[1]: libpod-conmon-75b224f16b293477d1bb650d61dd0edc78b497c4cc81cc29ce2a732d687b2773.scope: Deactivated successfully.
Oct 01 14:18:18 compute-0 sudo[314871]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:18:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:18:18 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev c429f253-72e7-4786-ae89-34cf598d6f88 does not exist
Oct 01 14:18:18 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 47a3353e-4d07-4358-abde-335fdb3247c2 does not exist
Oct 01 14:18:18 compute-0 sudo[315040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:18:18 compute-0 sudo[315040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:18 compute-0 sudo[315040]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:18 compute-0 sudo[315065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:18:18 compute-0 sudo[315065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:18:18 compute-0 sudo[315065]: pam_unix(sudo:session): session closed for user root
Oct 01 14:18:19 compute-0 ceph-mon[74802]: pgmap v2272: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:19 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:18:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:21 compute-0 podman[315092]: 2025-10-01 14:18:21.519620938 +0000 UTC m=+0.065077198 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20250923)
Oct 01 14:18:21 compute-0 podman[315093]: 2025-10-01 14:18:21.521769116 +0000 UTC m=+0.062647100 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:18:21 compute-0 podman[315091]: 2025-10-01 14:18:21.527392794 +0000 UTC m=+0.075330393 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 01 14:18:21 compute-0 podman[315090]: 2025-10-01 14:18:21.569724309 +0000 UTC m=+0.120920112 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 14:18:21 compute-0 ceph-mon[74802]: pgmap v2273: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:22 compute-0 ceph-mon[74802]: pgmap v2274: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:24 compute-0 ceph-mon[74802]: pgmap v2275: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:25 compute-0 nova_compute[260022]: 2025-10-01 14:18:25.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:26 compute-0 ceph-mon[74802]: pgmap v2276: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:28 compute-0 ceph-mon[74802]: pgmap v2277: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:31 compute-0 ceph-mon[74802]: pgmap v2278: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.489 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.490 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.491 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:18:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:18:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1974965730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:18:31 compute-0 nova_compute[260022]: 2025-10-01 14:18:31.928 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.099 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.101 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4989MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.101 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.102 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:18:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1974965730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.413 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.500 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.501 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.502 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.555 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:18:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:18:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949324787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:18:32 compute-0 nova_compute[260022]: 2025-10-01 14:18:32.996 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:18:33 compute-0 nova_compute[260022]: 2025-10-01 14:18:33.002 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:18:33 compute-0 nova_compute[260022]: 2025-10-01 14:18:33.072 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:18:33 compute-0 nova_compute[260022]: 2025-10-01 14:18:33.074 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:18:33 compute-0 nova_compute[260022]: 2025-10-01 14:18:33.075 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:18:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:33 compute-0 ceph-mon[74802]: pgmap v2279: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:33 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1949324787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:18:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:35 compute-0 ceph-mon[74802]: pgmap v2280: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:37 compute-0 ceph-mon[74802]: pgmap v2281: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:38 compute-0 nova_compute[260022]: 2025-10-01 14:18:38.070 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:38 compute-0 nova_compute[260022]: 2025-10-01 14:18:38.071 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:38 compute-0 nova_compute[260022]: 2025-10-01 14:18:38.071 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:18:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:38 compute-0 nova_compute[260022]: 2025-10-01 14:18:38.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:39 compute-0 nova_compute[260022]: 2025-10-01 14:18:39.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:39 compute-0 ceph-mon[74802]: pgmap v2282: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:41 compute-0 ceph-mon[74802]: pgmap v2283: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:42 compute-0 nova_compute[260022]: 2025-10-01 14:18:42.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:42 compute-0 nova_compute[260022]: 2025-10-01 14:18:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:18:42 compute-0 nova_compute[260022]: 2025-10-01 14:18:42.347 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:18:42 compute-0 nova_compute[260022]: 2025-10-01 14:18:42.361 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:18:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:43 compute-0 ceph-mon[74802]: pgmap v2284: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:45 compute-0 nova_compute[260022]: 2025-10-01 14:18:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:45 compute-0 ceph-mon[74802]: pgmap v2285: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:46 compute-0 nova_compute[260022]: 2025-10-01 14:18:46.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:18:47 compute-0 ceph-mon[74802]: pgmap v2286: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:18:47
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms']
Oct 01 14:18:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:18:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:18:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:18:49 compute-0 ceph-mon[74802]: pgmap v2287: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:51 compute-0 ceph-mon[74802]: pgmap v2288: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:52 compute-0 podman[315216]: 2025-10-01 14:18:52.512925234 +0000 UTC m=+0.063082574 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:18:52 compute-0 podman[315215]: 2025-10-01 14:18:52.517744858 +0000 UTC m=+0.065508703 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:18:52 compute-0 podman[315217]: 2025-10-01 14:18:52.538661861 +0000 UTC m=+0.076272213 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:18:52 compute-0 podman[315214]: 2025-10-01 14:18:52.550551659 +0000 UTC m=+0.100849444 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:18:52 compute-0 ceph-mon[74802]: pgmap v2289: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:54 compute-0 ceph-mon[74802]: pgmap v2290: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:18:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:18:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:18:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:18:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:18:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3019926358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:18:57 compute-0 ceph-mon[74802]: pgmap v2291: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:18:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:18:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:18:59 compute-0 ceph-mon[74802]: pgmap v2292: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:18:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:01 compute-0 ceph-mon[74802]: pgmap v2293: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:03 compute-0 ceph-mon[74802]: pgmap v2294: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:05 compute-0 ceph-mon[74802]: pgmap v2295: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:07 compute-0 ceph-mon[74802]: pgmap v2296: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:09 compute-0 ceph-mon[74802]: pgmap v2297: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:10 compute-0 nova_compute[260022]: 2025-10-01 14:19:10.340 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:10 compute-0 ceph-mon[74802]: pgmap v2298: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.345 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:19:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.346 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:19:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:19:12.346 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:19:12 compute-0 ceph-mon[74802]: pgmap v2299: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:15 compute-0 ceph-mon[74802]: pgmap v2300: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:17 compute-0 ceph-mon[74802]: pgmap v2301: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:18 compute-0 sudo[315295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:18 compute-0 sudo[315295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:18 compute-0 sudo[315295]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:18 compute-0 sudo[315320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:19:19 compute-0 sudo[315320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:19 compute-0 sudo[315320]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:19 compute-0 sudo[315345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:19 compute-0 sudo[315345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:19 compute-0 sudo[315345]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:19 compute-0 sudo[315370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:19:19 compute-0 sudo[315370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:19 compute-0 ceph-mon[74802]: pgmap v2302: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:19 compute-0 sudo[315370]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 76926bb3-7b73-4fd6-bb74-67ad34fe2724 does not exist
Oct 01 14:19:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev db623ba0-484d-47cb-8773-37445ccdba3d does not exist
Oct 01 14:19:19 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 18ec3e0c-bcdb-45ea-b7d7-7be58a71bf1a does not exist
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:19:19 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:19:19 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:19:19 compute-0 sudo[315426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:19 compute-0 sudo[315426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:19 compute-0 sudo[315426]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:19 compute-0 sudo[315451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:19:19 compute-0 sudo[315451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:19 compute-0 sudo[315451]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:20 compute-0 sudo[315476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:20 compute-0 sudo[315476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:20 compute-0 sudo[315476]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:20 compute-0 sudo[315501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:19:20 compute-0 sudo[315501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:19:20 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.606116808 +0000 UTC m=+0.069238660 container create 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 14:19:20 compute-0 systemd[1]: Started libpod-conmon-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope.
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.579667248 +0000 UTC m=+0.042789150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:20 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.695643801 +0000 UTC m=+0.158765623 container init 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.707819328 +0000 UTC m=+0.170941150 container start 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.712503447 +0000 UTC m=+0.175625259 container attach 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:19:20 compute-0 wonderful_dirac[315584]: 167 167
Oct 01 14:19:20 compute-0 systemd[1]: libpod-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope: Deactivated successfully.
Oct 01 14:19:20 compute-0 conmon[315584]: conmon 38bbe72b486ac323c7f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope/container/memory.events
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.715591315 +0000 UTC m=+0.178713137 container died 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f77f3c0002cc8c4a54a6dc0564c1972af193999cdf2186346d3fcd148b1a1ce1-merged.mount: Deactivated successfully.
Oct 01 14:19:20 compute-0 podman[315568]: 2025-10-01 14:19:20.766332967 +0000 UTC m=+0.229454809 container remove 38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:19:20 compute-0 systemd[1]: libpod-conmon-38bbe72b486ac323c7f1397019adee38290fb195836bdaa1582599e2beed0085.scope: Deactivated successfully.
Oct 01 14:19:20 compute-0 podman[315607]: 2025-10-01 14:19:20.988278866 +0000 UTC m=+0.041661814 container create 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:19:21 compute-0 systemd[1]: Started libpod-conmon-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope.
Oct 01 14:19:21 compute-0 podman[315607]: 2025-10-01 14:19:20.974624873 +0000 UTC m=+0.028007841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:21 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:21 compute-0 podman[315607]: 2025-10-01 14:19:21.090456381 +0000 UTC m=+0.143839339 container init 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 14:19:21 compute-0 podman[315607]: 2025-10-01 14:19:21.10209107 +0000 UTC m=+0.155474018 container start 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:19:21 compute-0 podman[315607]: 2025-10-01 14:19:21.10678509 +0000 UTC m=+0.160168038 container attach 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:19:21 compute-0 ceph-mon[74802]: pgmap v2303: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:22 compute-0 brave_curran[315623]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:19:22 compute-0 brave_curran[315623]: --> relative data size: 1.0
Oct 01 14:19:22 compute-0 brave_curran[315623]: --> All data devices are unavailable
Oct 01 14:19:22 compute-0 systemd[1]: libpod-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Deactivated successfully.
Oct 01 14:19:22 compute-0 systemd[1]: libpod-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Consumed 1.086s CPU time.
Oct 01 14:19:22 compute-0 podman[315607]: 2025-10-01 14:19:22.228141655 +0000 UTC m=+1.281524633 container died 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5b87af07901e7f5814b75d4a3b51a19bedd046225514beb923ddc7194ac4c9-merged.mount: Deactivated successfully.
Oct 01 14:19:22 compute-0 podman[315607]: 2025-10-01 14:19:22.297915751 +0000 UTC m=+1.351298739 container remove 38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 01 14:19:22 compute-0 systemd[1]: libpod-conmon-38d99753b630db6663983f40f0f9c7d1fdd45ef1b9b6ceb92d94e2d3a1219465.scope: Deactivated successfully.
Oct 01 14:19:22 compute-0 sudo[315501]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:22 compute-0 sudo[315666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:22 compute-0 sudo[315666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:22 compute-0 sudo[315666]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:22 compute-0 sudo[315691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:19:22 compute-0 sudo[315691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:22 compute-0 sudo[315691]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:22 compute-0 sudo[315716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:22 compute-0 sudo[315716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:22 compute-0 sudo[315716]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:22 compute-0 sudo[315764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:19:22 compute-0 sudo[315764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:22 compute-0 podman[315741]: 2025-10-01 14:19:22.687711751 +0000 UTC m=+0.070287363 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20250923, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:19:22 compute-0 podman[315743]: 2025-10-01 14:19:22.687897587 +0000 UTC m=+0.071224193 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923)
Oct 01 14:19:22 compute-0 podman[315742]: 2025-10-01 14:19:22.709467742 +0000 UTC m=+0.093695196 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20250923, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 01 14:19:22 compute-0 podman[315740]: 2025-10-01 14:19:22.711669272 +0000 UTC m=+0.095309999 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=36bccb96575468ec919301205d8daa2c, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.038770481 +0000 UTC m=+0.061511355 container create 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct 01 14:19:23 compute-0 systemd[1]: Started libpod-conmon-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope.
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.007684384 +0000 UTC m=+0.030425318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.205137745 +0000 UTC m=+0.227878679 container init 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.216210486 +0000 UTC m=+0.238951330 container start 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:19:23 compute-0 romantic_chaplygin[315904]: 167 167
Oct 01 14:19:23 compute-0 systemd[1]: libpod-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope: Deactivated successfully.
Oct 01 14:19:23 compute-0 conmon[315904]: conmon 37768986a38727dc018a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope/container/memory.events
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.226872065 +0000 UTC m=+0.249612969 container attach 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.229148878 +0000 UTC m=+0.251889732 container died 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c33736adf664b5573e997b7a65d1df2e092a9ceeadc73aa175c9b53fa83dbb-merged.mount: Deactivated successfully.
Oct 01 14:19:23 compute-0 podman[315888]: 2025-10-01 14:19:23.299823472 +0000 UTC m=+0.322564316 container remove 37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chaplygin, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:19:23 compute-0 systemd[1]: libpod-conmon-37768986a38727dc018a01e8a4d2943f8f679e7690a077d6d6eff9ae4d934476.scope: Deactivated successfully.
Oct 01 14:19:23 compute-0 ceph-mon[74802]: pgmap v2304: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:23 compute-0 podman[315931]: 2025-10-01 14:19:23.460786065 +0000 UTC m=+0.040730546 container create 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct 01 14:19:23 compute-0 systemd[1]: Started libpod-conmon-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope.
Oct 01 14:19:23 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:23 compute-0 podman[315931]: 2025-10-01 14:19:23.444651512 +0000 UTC m=+0.024596013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:23 compute-0 podman[315931]: 2025-10-01 14:19:23.557162485 +0000 UTC m=+0.137107016 container init 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 14:19:23 compute-0 podman[315931]: 2025-10-01 14:19:23.572026657 +0000 UTC m=+0.151971148 container start 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:19:23 compute-0 podman[315931]: 2025-10-01 14:19:23.575939832 +0000 UTC m=+0.155884423 container attach 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:19:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:24 compute-0 condescending_mayer[315948]: {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     "0": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "devices": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "/dev/loop3"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             ],
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_name": "ceph_lv0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_size": "21470642176",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "name": "ceph_lv0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "tags": {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_name": "ceph",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.crush_device_class": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.encrypted": "0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_id": "0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.vdo": "0"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             },
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "vg_name": "ceph_vg0"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         }
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     ],
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     "1": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "devices": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "/dev/loop4"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             ],
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_name": "ceph_lv1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_size": "21470642176",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "name": "ceph_lv1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "tags": {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_name": "ceph",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.crush_device_class": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.encrypted": "0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_id": "1",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.vdo": "0"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             },
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "vg_name": "ceph_vg1"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         }
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     ],
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     "2": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "devices": [
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "/dev/loop5"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             ],
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_name": "ceph_lv2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_size": "21470642176",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "name": "ceph_lv2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "tags": {
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.cluster_name": "ceph",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.crush_device_class": "",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.encrypted": "0",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osd_id": "2",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:                 "ceph.vdo": "0"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             },
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "type": "block",
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:             "vg_name": "ceph_vg2"
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:         }
Oct 01 14:19:24 compute-0 condescending_mayer[315948]:     ]
Oct 01 14:19:24 compute-0 condescending_mayer[315948]: }
Oct 01 14:19:24 compute-0 systemd[1]: libpod-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope: Deactivated successfully.
Oct 01 14:19:24 compute-0 podman[315931]: 2025-10-01 14:19:24.385752591 +0000 UTC m=+0.965697093 container died 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba828791877e396b941db3fe5a04b0bda76a34cd163925f4eb707cd57694c87c-merged.mount: Deactivated successfully.
Oct 01 14:19:24 compute-0 podman[315931]: 2025-10-01 14:19:24.443563478 +0000 UTC m=+1.023507959 container remove 3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mayer, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:19:24 compute-0 systemd[1]: libpod-conmon-3cab1e35d2f50c801e642cae5e1079eaf5f48573e8026e7188386581a261bc9e.scope: Deactivated successfully.
Oct 01 14:19:24 compute-0 sudo[315764]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:24 compute-0 sudo[315971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:24 compute-0 sudo[315971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:24 compute-0 sudo[315971]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:24 compute-0 sudo[315996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:19:24 compute-0 sudo[315996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:24 compute-0 sudo[315996]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:24 compute-0 sudo[316021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:24 compute-0 sudo[316021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:24 compute-0 sudo[316021]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:24 compute-0 sudo[316046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:19:24 compute-0 sudo[316046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.239808607 +0000 UTC m=+0.069421256 container create 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:19:25 compute-0 systemd[1]: Started libpod-conmon-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope.
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.214686529 +0000 UTC m=+0.044299248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.325854881 +0000 UTC m=+0.155467610 container init 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.337805709 +0000 UTC m=+0.167418338 container start 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.342388636 +0000 UTC m=+0.172001315 container attach 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 01 14:19:25 compute-0 distracted_khorana[316127]: 167 167
Oct 01 14:19:25 compute-0 systemd[1]: libpod-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope: Deactivated successfully.
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.344845434 +0000 UTC m=+0.174458113 container died 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f610ea00ad54d50ae03def47842198f98c042a33cc1563103edd939d3f096b6-merged.mount: Deactivated successfully.
Oct 01 14:19:25 compute-0 podman[316111]: 2025-10-01 14:19:25.387309762 +0000 UTC m=+0.216922421 container remove 10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:19:25 compute-0 systemd[1]: libpod-conmon-10b1bb8a7aa8174bf1d3d3e172747f3e6aa37c858aad389ebac37b777b4e1cd7.scope: Deactivated successfully.
Oct 01 14:19:25 compute-0 ceph-mon[74802]: pgmap v2305: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:25 compute-0 podman[316150]: 2025-10-01 14:19:25.638753648 +0000 UTC m=+0.059521082 container create 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 14:19:25 compute-0 systemd[1]: Started libpod-conmon-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope.
Oct 01 14:19:25 compute-0 podman[316150]: 2025-10-01 14:19:25.618926008 +0000 UTC m=+0.039693552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:19:25 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:19:25 compute-0 podman[316150]: 2025-10-01 14:19:25.737321349 +0000 UTC m=+0.158088873 container init 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:19:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:25 compute-0 podman[316150]: 2025-10-01 14:19:25.753559654 +0000 UTC m=+0.174327148 container start 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:19:25 compute-0 podman[316150]: 2025-10-01 14:19:25.75753975 +0000 UTC m=+0.178307234 container attach 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]: {
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_id": 0,
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "type": "bluestore"
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     },
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_id": 2,
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "type": "bluestore"
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     },
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_id": 1,
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:         "type": "bluestore"
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]:     }
Oct 01 14:19:26 compute-0 elastic_cartwright[316166]: }
Oct 01 14:19:26 compute-0 systemd[1]: libpod-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Deactivated successfully.
Oct 01 14:19:26 compute-0 podman[316150]: 2025-10-01 14:19:26.849861723 +0000 UTC m=+1.270629237 container died 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 01 14:19:26 compute-0 systemd[1]: libpod-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Consumed 1.103s CPU time.
Oct 01 14:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83e363c7853e518b497d21fec6970fa3ad1dce2d6041f40558b9c3cd2a69aa9-merged.mount: Deactivated successfully.
Oct 01 14:19:26 compute-0 podman[316150]: 2025-10-01 14:19:26.928876993 +0000 UTC m=+1.349644477 container remove 2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:19:26 compute-0 systemd[1]: libpod-conmon-2c1a1206ba1904eb3fc165c0ec731da68f1fb99226e8bda8317ae9a25d4e4d51.scope: Deactivated successfully.
Oct 01 14:19:26 compute-0 sudo[316046]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:19:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:26 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:19:26 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:26 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2f972b3c-ed96-48de-86f1-09b9760b37b9 does not exist
Oct 01 14:19:26 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 95191127-b778-4538-b244-7fdfe0c91661 does not exist
Oct 01 14:19:27 compute-0 sudo[316212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:19:27 compute-0 sudo[316212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:27 compute-0 sudo[316212]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:27 compute-0 sudo[316237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:19:27 compute-0 sudo[316237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:19:27 compute-0 sudo[316237]: pam_unix(sudo:session): session closed for user root
Oct 01 14:19:27 compute-0 nova_compute[260022]: 2025-10-01 14:19:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:27 compute-0 ceph-mon[74802]: pgmap v2306: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:27 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:19:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.467848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368467897, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 251, "total_data_size": 2909117, "memory_usage": 2957648, "flush_reason": "Manual Compaction"}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368486688, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2859320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44903, "largest_seqno": 46674, "table_properties": {"data_size": 2851079, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16443, "raw_average_key_size": 19, "raw_value_size": 2834800, "raw_average_value_size": 3444, "num_data_blocks": 225, "num_entries": 823, "num_filter_entries": 823, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328174, "oldest_key_time": 1759328174, "file_creation_time": 1759328368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 18943 microseconds, and 10566 cpu microseconds.
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.486789) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2859320 bytes OK
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.486811) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488769) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488783) EVENT_LOG_v1 {"time_micros": 1759328368488779, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.488801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2901574, prev total WAL file size 2901574, number of live WAL files 2.
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.489687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2792KB)], [107(6963KB)]
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368489712, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9990000, "oldest_snapshot_seqno": -1}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6205 keys, 8225608 bytes, temperature: kUnknown
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368534272, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 8225608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8185813, "index_size": 23173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 161340, "raw_average_key_size": 26, "raw_value_size": 8075008, "raw_average_value_size": 1301, "num_data_blocks": 916, "num_entries": 6205, "num_filter_entries": 6205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.534629) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 8225608 bytes
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.536389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.1 rd, 183.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 6.8 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6719, records dropped: 514 output_compression: NoCompression
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.536405) EVENT_LOG_v1 {"time_micros": 1759328368536397, "job": 64, "event": "compaction_finished", "compaction_time_micros": 44771, "compaction_time_cpu_micros": 26260, "output_level": 6, "num_output_files": 1, "total_output_size": 8225608, "num_input_records": 6719, "num_output_records": 6205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368537348, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328368539046, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.489628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:28 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:19:28.539162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:19:29 compute-0 ceph-mon[74802]: pgmap v2307: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:31 compute-0 ceph-mon[74802]: pgmap v2308: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.845 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.845 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:19:31 compute-0 nova_compute[260022]: 2025-10-01 14:19:31.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:19:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:19:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557468855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.307 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:19:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/557468855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.503 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4978MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.505 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.643 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.660 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.661 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.661 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:19:32 compute-0 nova_compute[260022]: 2025-10-01 14:19:32.714 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:19:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:19:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3685988063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:19:33 compute-0 nova_compute[260022]: 2025-10-01 14:19:33.186 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:19:33 compute-0 nova_compute[260022]: 2025-10-01 14:19:33.194 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:19:33 compute-0 ceph-mon[74802]: pgmap v2309: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:33 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3685988063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:19:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:34 compute-0 nova_compute[260022]: 2025-10-01 14:19:34.628 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:19:34 compute-0 nova_compute[260022]: 2025-10-01 14:19:34.631 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:19:34 compute-0 nova_compute[260022]: 2025-10-01 14:19:34.632 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:19:35 compute-0 ceph-mon[74802]: pgmap v2310: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:37 compute-0 ceph-mon[74802]: pgmap v2311: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:38 compute-0 unix_chkpwd[316308]: password check failed for user (sshd)
Oct 01 14:19:38 compute-0 sshd-session[316306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=185.156.73.233  user=sshd
Oct 01 14:19:39 compute-0 ceph-mon[74802]: pgmap v2312: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:39 compute-0 sshd-session[316306]: Failed password for sshd from 185.156.73.233 port 54500 ssh2
Oct 01 14:19:40 compute-0 nova_compute[260022]: 2025-10-01 14:19:40.627 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:40 compute-0 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:40 compute-0 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:40 compute-0 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:40 compute-0 nova_compute[260022]: 2025-10-01 14:19:40.628 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:19:41 compute-0 sshd-session[316306]: Connection closed by authenticating user sshd 185.156.73.233 port 54500 [preauth]
Oct 01 14:19:41 compute-0 ceph-mon[74802]: pgmap v2313: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:43 compute-0 ceph-mon[74802]: pgmap v2314: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:44 compute-0 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:44 compute-0 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:19:44 compute-0 nova_compute[260022]: 2025-10-01 14:19:44.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:19:44 compute-0 nova_compute[260022]: 2025-10-01 14:19:44.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:19:45 compute-0 nova_compute[260022]: 2025-10-01 14:19:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:45 compute-0 ceph-mon[74802]: pgmap v2315: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:47 compute-0 nova_compute[260022]: 2025-10-01 14:19:47.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:19:47 compute-0 ceph-mon[74802]: pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:19:47
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Oct 01 14:19:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:19:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:19:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:19:49 compute-0 ceph-mon[74802]: pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:51 compute-0 ceph-mon[74802]: pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:52 compute-0 ceph-mon[74802]: pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:53 compute-0 podman[316310]: 2025-10-01 14:19:53.548635134 +0000 UTC m=+0.084732031 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Oct 01 14:19:53 compute-0 podman[316317]: 2025-10-01 14:19:53.560621925 +0000 UTC m=+0.087424298 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20250923, managed_by=edpm_ansible)
Oct 01 14:19:53 compute-0 podman[316311]: 2025-10-01 14:19:53.566516433 +0000 UTC m=+0.101653770 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 14:19:53 compute-0 podman[316309]: 2025-10-01 14:19:53.566687748 +0000 UTC m=+0.114609271 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 01 14:19:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:54 compute-0 ceph-mon[74802]: pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:19:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:19:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:19:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:19:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:19:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/3165582044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:19:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:19:55 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 8418 writes, 30K keys, 8418 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8418 writes, 2178 syncs, 3.87 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 440 writes, 1113 keys, 440 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                           Interval WAL: 440 writes, 206 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:19:56 compute-0 ceph-mon[74802]: pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:19:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:19:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:19:58 compute-0 ceph-mon[74802]: pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:19:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:20:00 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 9928 writes, 35K keys, 9928 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9928 writes, 2633 syncs, 3.77 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 502 writes, 1216 keys, 502 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s
                                           Interval WAL: 502 writes, 222 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:20:00 compute-0 ceph-mon[74802]: pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:02 compute-0 ceph-mon[74802]: pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:04 compute-0 ceph-mon[74802]: pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:20:05 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 8971 writes, 31K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8971 writes, 2398 syncs, 3.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 554 writes, 1220 keys, 554 commit groups, 1.0 writes per commit group, ingest: 0.58 MB, 0.00 MB/s
                                           Interval WAL: 554 writes, 253 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:20:06 compute-0 ceph-mon[74802]: pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:07 compute-0 ceph-mgr[75103]: [devicehealth INFO root] Check health
Oct 01 14:20:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:08 compute-0 ceph-mon[74802]: pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:10 compute-0 ceph-mon[74802]: pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:20:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:20:12.347 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:20:12 compute-0 ceph-mon[74802]: pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:15 compute-0 ceph-mon[74802]: pgmap v2330: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:17 compute-0 ceph-mon[74802]: pgmap v2331: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:19 compute-0 ceph-mon[74802]: pgmap v2332: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:21 compute-0 ceph-mon[74802]: pgmap v2333: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:23 compute-0 ceph-mon[74802]: pgmap v2334: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:24 compute-0 podman[316391]: 2025-10-01 14:20:24.530043059 +0000 UTC m=+0.077663158 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:20:24 compute-0 podman[316392]: 2025-10-01 14:20:24.537500365 +0000 UTC m=+0.081259922 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 01 14:20:24 compute-0 podman[316389]: 2025-10-01 14:20:24.558597386 +0000 UTC m=+0.117951638 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:20:24 compute-0 podman[316390]: 2025-10-01 14:20:24.570865955 +0000 UTC m=+0.127459199 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 01 14:20:25 compute-0 ceph-mon[74802]: pgmap v2335: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:27 compute-0 sudo[316471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Oct 01 14:20:27 compute-0 ceph-mon[74802]: pgmap v2336: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:27 compute-0 sudo[316471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:27 compute-0 sudo[316471]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Oct 01 14:20:27 compute-0 sudo[316496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:20:27 compute-0 sudo[316496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:27 compute-0 sudo[316496]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:27 compute-0 nova_compute[260022]: 2025-10-01 14:20:27.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:27 compute-0 sudo[316521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:27 compute-0 sudo[316521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:27 compute-0 sudo[316521]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:27 compute-0 sudo[316546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:20:27 compute-0 sudo[316546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct 01 14:20:27 compute-0 sudo[316546]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 068ea4d9-7946-48c2-9899-c4136cbee931 does not exist
Oct 01 14:20:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 65c59bf9-32ec-479a-8ad9-ebd40a906f8f does not exist
Oct 01 14:20:27 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 2ec48a62-fd3a-4f1b-b410-722801d3ae01 does not exist
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:20:27 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:20:27 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:20:28 compute-0 sudo[316601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:28 compute-0 sudo[316601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:28 compute-0 sudo[316601]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:28 compute-0 sudo[316626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:20:28 compute-0 sudo[316626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:28 compute-0 sudo[316626]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:28 compute-0 ceph-mon[74802]: osdmap e198: 3 total, 3 up, 3 in
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:20:28 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:20:28 compute-0 sudo[316651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:28 compute-0 sudo[316651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:28 compute-0 sudo[316651]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:28 compute-0 sudo[316676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:20:28 compute-0 sudo[316676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.650973772 +0000 UTC m=+0.045702113 container create ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.627412693 +0000 UTC m=+0.022141034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:28 compute-0 systemd[1]: Started libpod-conmon-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope.
Oct 01 14:20:28 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.814107313 +0000 UTC m=+0.208835714 container init ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.82345995 +0000 UTC m=+0.218188261 container start ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 01 14:20:28 compute-0 mystifying_turing[316757]: 167 167
Oct 01 14:20:28 compute-0 systemd[1]: libpod-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope: Deactivated successfully.
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.847897727 +0000 UTC m=+0.242626038 container attach ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:20:28 compute-0 podman[316741]: 2025-10-01 14:20:28.848874767 +0000 UTC m=+0.243603098 container died ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:20:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-001acc981824d0ebd8d6e6e9b046284350bb075501c28076a055f073c2e157ed-merged.mount: Deactivated successfully.
Oct 01 14:20:29 compute-0 ceph-mon[74802]: pgmap v2338: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct 01 14:20:29 compute-0 podman[316741]: 2025-10-01 14:20:29.350053685 +0000 UTC m=+0.744782036 container remove ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:20:29 compute-0 systemd[1]: libpod-conmon-ee82972bdae48b261b0511e4f9467dfc438b4597da262b82e0f19a141177aa56.scope: Deactivated successfully.
Oct 01 14:20:29 compute-0 podman[316783]: 2025-10-01 14:20:29.636869325 +0000 UTC m=+0.102436365 container create 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 01 14:20:29 compute-0 podman[316783]: 2025-10-01 14:20:29.578548643 +0000 UTC m=+0.044115673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct 01 14:20:29 compute-0 systemd[1]: Started libpod-conmon-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope.
Oct 01 14:20:29 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:29 compute-0 podman[316783]: 2025-10-01 14:20:29.965914055 +0000 UTC m=+0.431481125 container init 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 14:20:29 compute-0 podman[316783]: 2025-10-01 14:20:29.97739375 +0000 UTC m=+0.442960790 container start 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:20:29 compute-0 podman[316783]: 2025-10-01 14:20:29.990167586 +0000 UTC m=+0.455734636 container attach 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:20:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Oct 01 14:20:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Oct 01 14:20:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Oct 01 14:20:31 compute-0 compassionate_black[316799]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:20:31 compute-0 compassionate_black[316799]: --> relative data size: 1.0
Oct 01 14:20:31 compute-0 compassionate_black[316799]: --> All data devices are unavailable
Oct 01 14:20:31 compute-0 systemd[1]: libpod-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Deactivated successfully.
Oct 01 14:20:31 compute-0 podman[316783]: 2025-10-01 14:20:31.238201364 +0000 UTC m=+1.703768414 container died 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 01 14:20:31 compute-0 systemd[1]: libpod-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Consumed 1.004s CPU time.
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.379 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.380 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.381 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:20:31 compute-0 ceph-mon[74802]: pgmap v2339: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 511 B/s wr, 9 op/s
Oct 01 14:20:31 compute-0 ceph-mon[74802]: osdmap e199: 3 total, 3 up, 3 in
Oct 01 14:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb3710c7a89411b233bd880c50f1e669530f257627c9062ab57d2f9f720f1c4-merged.mount: Deactivated successfully.
Oct 01 14:20:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 639 B/s wr, 11 op/s
Oct 01 14:20:31 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:20:31 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250697024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:20:31 compute-0 podman[316783]: 2025-10-01 14:20:31.893566229 +0000 UTC m=+2.359133269 container remove 83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 01 14:20:31 compute-0 nova_compute[260022]: 2025-10-01 14:20:31.901 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:20:31 compute-0 systemd[1]: libpod-conmon-83bbaabe0f35a127fbe8bad058fad240560c9be5eede4e80b1f1066a9276f9e4.scope: Deactivated successfully.
Oct 01 14:20:31 compute-0 sudo[316676]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:32 compute-0 sudo[316864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:32 compute-0 sudo[316864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:32 compute-0 sudo[316864]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.070 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.072 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.072 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.073 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:20:32 compute-0 sudo[316889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:20:32 compute-0 sudo[316889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:32 compute-0 sudo[316889]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:32 compute-0 sudo[316914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:32 compute-0 sudo[316914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:32 compute-0 sudo[316914]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.176 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.190 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.190 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.191 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:20:32 compute-0 sudo[316939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:20:32 compute-0 sudo[316939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.300 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:20:32 compute-0 podman[317024]: 2025-10-01 14:20:32.587645194 +0000 UTC m=+0.024500669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:32 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:20:32 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665305703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:20:32 compute-0 podman[317024]: 2025-10-01 14:20:32.797896241 +0000 UTC m=+0.234751626 container create c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.820 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.826 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:20:32 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3250697024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.841 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
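The inventory dict logged above determines how much the Placement service will actually schedule on this host: each resource class exposes `(total - reserved) * allocation_ratio` units. A minimal sketch (the `effective_capacity` helper is hypothetical, not part of Nova) that derives those figures from the logged data:

```python
# Hypothetical helper: compute schedulable capacity per resource class
# the way Placement does, i.e. (total - reserved) * allocation_ratio.
def effective_capacity(inventory):
    return {
        rc: (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        for rc, inv in inventory.items()
    }

# Inventory exactly as logged by nova.scheduler.client.report above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "min_unit": 1, "max_unit": 8,
             "step_size": 1, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "min_unit": 1,
                  "max_unit": 7679, "step_size": 1, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 0, "min_unit": 1, "max_unit": 59,
                "step_size": 1, "allocation_ratio": 0.9},
}

print(effective_capacity(inventory))
```

So an 8-vCPU host with `allocation_ratio: 4.0` advertises 32 schedulable VCPUs, while the 512 MB `reserved` is subtracted from memory before its ratio is applied.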
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.843 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:20:32 compute-0 nova_compute[260022]: 2025-10-01 14:20:32.844 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:20:33 compute-0 systemd[1]: Started libpod-conmon-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope.
Oct 01 14:20:33 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:33 compute-0 podman[317024]: 2025-10-01 14:20:33.158487604 +0000 UTC m=+0.595343089 container init c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct 01 14:20:33 compute-0 podman[317024]: 2025-10-01 14:20:33.170571528 +0000 UTC m=+0.607426953 container start c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 01 14:20:33 compute-0 modest_goldberg[317042]: 167 167
Oct 01 14:20:33 compute-0 systemd[1]: libpod-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope: Deactivated successfully.
Oct 01 14:20:33 compute-0 podman[317024]: 2025-10-01 14:20:33.190845911 +0000 UTC m=+0.627701306 container attach c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 01 14:20:33 compute-0 podman[317024]: 2025-10-01 14:20:33.192426332 +0000 UTC m=+0.629281747 container died c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 14:20:33 compute-0 nova_compute[260022]: 2025-10-01 14:20:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2f7bb72f0f87f91d28bbe8972218c2f14e9f6ce908fdf62d005724a8aedbd74-merged.mount: Deactivated successfully.
Oct 01 14:20:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Oct 01 14:20:33 compute-0 podman[317024]: 2025-10-01 14:20:33.955045812 +0000 UTC m=+1.391901237 container remove c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 01 14:20:33 compute-0 systemd[1]: libpod-conmon-c9aff7fadf38fe6b6035e07a9823cef34b29a4d1c59672d473a785394be044e0.scope: Deactivated successfully.
Oct 01 14:20:34 compute-0 ceph-mon[74802]: pgmap v2341: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 21 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 639 B/s wr, 11 op/s
Oct 01 14:20:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/665305703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:20:34 compute-0 podman[317066]: 2025-10-01 14:20:34.166010043 +0000 UTC m=+0.029063574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:34 compute-0 podman[317066]: 2025-10-01 14:20:34.375053142 +0000 UTC m=+0.238106663 container create 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct 01 14:20:34 compute-0 systemd[1]: Started libpod-conmon-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope.
Oct 01 14:20:34 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:34 compute-0 podman[317066]: 2025-10-01 14:20:34.816935758 +0000 UTC m=+0.679989319 container init 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 01 14:20:34 compute-0 podman[317066]: 2025-10-01 14:20:34.825827519 +0000 UTC m=+0.688881040 container start 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:20:34 compute-0 podman[317066]: 2025-10-01 14:20:34.927085255 +0000 UTC m=+0.790138826 container attach 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:20:35 compute-0 ceph-mon[74802]: pgmap v2342: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]: {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     "0": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "devices": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "/dev/loop3"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             ],
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_name": "ceph_lv0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_size": "21470642176",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "name": "ceph_lv0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "tags": {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_name": "ceph",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.crush_device_class": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.encrypted": "0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_id": "0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.vdo": "0"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             },
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "vg_name": "ceph_vg0"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         }
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     ],
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     "1": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "devices": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "/dev/loop4"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             ],
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_name": "ceph_lv1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_size": "21470642176",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "name": "ceph_lv1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "tags": {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_name": "ceph",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.crush_device_class": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.encrypted": "0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_id": "1",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.vdo": "0"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             },
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "vg_name": "ceph_vg1"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         }
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     ],
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     "2": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "devices": [
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "/dev/loop5"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             ],
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_name": "ceph_lv2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_size": "21470642176",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "name": "ceph_lv2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "tags": {
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.cluster_name": "ceph",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.crush_device_class": "",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.encrypted": "0",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osd_id": "2",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:                 "ceph.vdo": "0"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             },
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "type": "block",
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:             "vg_name": "ceph_vg2"
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:         }
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]:     ]
Oct 01 14:20:35 compute-0 stupefied_lehmann[317082]: }
Oct 01 14:20:35 compute-0 systemd[1]: libpod-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope: Deactivated successfully.
Oct 01 14:20:35 compute-0 podman[317066]: 2025-10-01 14:20:35.642537459 +0000 UTC m=+1.505590960 container died 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct 01 14:20:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct 01 14:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-65abd5cfff8eeba302cf7e7534908486adaacc2dda6dc65e98b6942f570275c5-merged.mount: Deactivated successfully.
Oct 01 14:20:36 compute-0 podman[317066]: 2025-10-01 14:20:36.141249708 +0000 UTC m=+2.004303189 container remove 29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 01 14:20:36 compute-0 systemd[1]: libpod-conmon-29ed2cfa253de44b021458826684c6cdcbd704d5befec1bc4805d15d7c57a97a.scope: Deactivated successfully.
Oct 01 14:20:36 compute-0 sudo[316939]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:36 compute-0 sudo[317104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:36 compute-0 sudo[317104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:36 compute-0 sudo[317104]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:36 compute-0 sudo[317129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:20:36 compute-0 sudo[317129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:36 compute-0 sudo[317129]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:36 compute-0 sudo[317154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:36 compute-0 sudo[317154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:36 compute-0 sudo[317154]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:36 compute-0 sudo[317179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:20:36 compute-0 sudo[317179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.777089732 +0000 UTC m=+0.044292908 container create 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:20:36 compute-0 systemd[1]: Started libpod-conmon-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope.
Oct 01 14:20:36 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.756281482 +0000 UTC m=+0.023484688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.919255798 +0000 UTC m=+0.186458994 container init 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.925356511 +0000 UTC m=+0.192559717 container start 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:20:36 compute-0 sleepy_ishizaka[317262]: 167 167
Oct 01 14:20:36 compute-0 systemd[1]: libpod-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope: Deactivated successfully.
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.960163047 +0000 UTC m=+0.227366293 container attach 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:20:36 compute-0 podman[317246]: 2025-10-01 14:20:36.961139558 +0000 UTC m=+0.228342774 container died 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b16f77468f2f778b429aa8fbc18849475fcd6b45cd737a0fb7ad21f92aa088f7-merged.mount: Deactivated successfully.
Oct 01 14:20:37 compute-0 podman[317246]: 2025-10-01 14:20:37.389446692 +0000 UTC m=+0.656649918 container remove 5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 14:20:37 compute-0 systemd[1]: libpod-conmon-5cd08952925859226a36f031ccadccc64074fd2ca957ca4a20a8aba0f2116a43.scope: Deactivated successfully.
Oct 01 14:20:37 compute-0 ceph-mon[74802]: pgmap v2343: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 42 op/s
Oct 01 14:20:37 compute-0 podman[317286]: 2025-10-01 14:20:37.660936765 +0000 UTC m=+0.067498305 container create 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:20:37 compute-0 systemd[1]: Started libpod-conmon-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope.
Oct 01 14:20:37 compute-0 podman[317286]: 2025-10-01 14:20:37.634227056 +0000 UTC m=+0.040788646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:20:37 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:20:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 01 14:20:37 compute-0 podman[317286]: 2025-10-01 14:20:37.787176444 +0000 UTC m=+0.193738094 container init 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:20:37 compute-0 podman[317286]: 2025-10-01 14:20:37.802632895 +0000 UTC m=+0.209194475 container start 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 01 14:20:37 compute-0 podman[317286]: 2025-10-01 14:20:37.807561451 +0000 UTC m=+0.214123001 container attach 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:20:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Oct 01 14:20:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Oct 01 14:20:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Oct 01 14:20:38 compute-0 nova_compute[260022]: 2025-10-01 14:20:38.359 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:38 compute-0 nova_compute[260022]: 2025-10-01 14:20:38.361 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]: {
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_id": 0,
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "type": "bluestore"
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     },
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_id": 2,
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "type": "bluestore"
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     },
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_id": 1,
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:         "type": "bluestore"
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]:     }
Oct 01 14:20:38 compute-0 musing_grothendieck[317302]: }
Oct 01 14:20:38 compute-0 systemd[1]: libpod-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Deactivated successfully.
Oct 01 14:20:38 compute-0 podman[317286]: 2025-10-01 14:20:38.896573919 +0000 UTC m=+1.303135449 container died 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct 01 14:20:38 compute-0 systemd[1]: libpod-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Consumed 1.098s CPU time.
Oct 01 14:20:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0ea3a81922f02c4151c8bc4e4d92f2ce21a42654a2aadbda73e5ddc4b61a68e-merged.mount: Deactivated successfully.
Oct 01 14:20:38 compute-0 podman[317286]: 2025-10-01 14:20:38.967951296 +0000 UTC m=+1.374512876 container remove 818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:20:38 compute-0 systemd[1]: libpod-conmon-818538c529d7add432ce676953c6226e3e8e67b7ed0d258d73a43bbd615a4f54.scope: Deactivated successfully.
Oct 01 14:20:39 compute-0 sudo[317179]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:20:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:20:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:39 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 24b49fd7-4591-40bd-a9b7-75ed0ad77501 does not exist
Oct 01 14:20:39 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev bfb405d2-899e-4cd8-b290-737ea42c04ac does not exist
Oct 01 14:20:39 compute-0 sudo[317349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:20:39 compute-0 sudo[317349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:39 compute-0 sudo[317349]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:39 compute-0 ceph-mon[74802]: pgmap v2344: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.3 KiB/s wr, 41 op/s
Oct 01 14:20:39 compute-0 ceph-mon[74802]: osdmap e200: 3 total, 3 up, 3 in
Oct 01 14:20:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:39 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:20:39 compute-0 sudo[317374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:20:39 compute-0 sudo[317374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:20:39 compute-0 sudo[317374]: pam_unix(sudo:session): session closed for user root
Oct 01 14:20:39 compute-0 nova_compute[260022]: 2025-10-01 14:20:39.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:39 compute-0 nova_compute[260022]: 2025-10-01 14:20:39.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:20:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.5 KiB/s wr, 45 op/s
Oct 01 14:20:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Oct 01 14:20:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Oct 01 14:20:40 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Oct 01 14:20:40 compute-0 nova_compute[260022]: 2025-10-01 14:20:40.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:41 compute-0 ceph-mon[74802]: pgmap v2346: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.5 KiB/s wr, 45 op/s
Oct 01 14:20:41 compute-0 ceph-mon[74802]: osdmap e201: 3 total, 3 up, 3 in
Oct 01 14:20:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 895 B/s wr, 9 op/s
Oct 01 14:20:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:43 compute-0 ceph-mon[74802]: pgmap v2348: 305 pgs: 305 active+clean; 456 KiB data, 216 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 895 B/s wr, 9 op/s
Oct 01 14:20:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct 01 14:20:45 compute-0 nova_compute[260022]: 2025-10-01 14:20:45.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:45 compute-0 nova_compute[260022]: 2025-10-01 14:20:45.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:20:45 compute-0 nova_compute[260022]: 2025-10-01 14:20:45.346 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:20:45 compute-0 ceph-mon[74802]: pgmap v2349: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Oct 01 14:20:45 compute-0 nova_compute[260022]: 2025-10-01 14:20:45.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:20:45 compute-0 nova_compute[260022]: 2025-10-01 14:20:45.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 01 14:20:47 compute-0 ceph-mon[74802]: pgmap v2350: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:20:47
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct 01 14:20:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:20:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:20:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:20:49 compute-0 nova_compute[260022]: 2025-10-01 14:20:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:49 compute-0 ceph-mon[74802]: pgmap v2351: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.1 MiB/s wr, 12 op/s
Oct 01 14:20:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Oct 01 14:20:50 compute-0 nova_compute[260022]: 2025-10-01 14:20:50.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:50 compute-0 nova_compute[260022]: 2025-10-01 14:20:50.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 01 14:20:50 compute-0 nova_compute[260022]: 2025-10-01 14:20:50.385 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 01 14:20:51 compute-0 ceph-mon[74802]: pgmap v2352: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Oct 01 14:20:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Oct 01 14:20:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:53 compute-0 ceph-mon[74802]: pgmap v2353: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.8 MiB/s wr, 10 op/s
Oct 01 14:20:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct 01 14:20:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:20:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:20:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:20:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:20:55 compute-0 podman[317402]: 2025-10-01 14:20:55.508527108 +0000 UTC m=+0.060151152 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 01 14:20:55 compute-0 podman[317401]: 2025-10-01 14:20:55.510341725 +0000 UTC m=+0.065223053 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 01 14:20:55 compute-0 podman[317400]: 2025-10-01 14:20:55.511693157 +0000 UTC m=+0.067157884 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct 01 14:20:55 compute-0 ceph-mon[74802]: pgmap v2354: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 9 op/s
Oct 01 14:20:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:20:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1674213025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:20:55 compute-0 podman[317399]: 2025-10-01 14:20:55.54295041 +0000 UTC m=+0.096365651 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:20:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:57 compute-0 nova_compute[260022]: 2025-10-01 14:20:57.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:20:57 compute-0 nova_compute[260022]: 2025-10-01 14:20:57.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 01 14:20:57 compute-0 ceph-mon[74802]: pgmap v2355: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033296094614833626 of space, bias 1.0, pg target 0.09988828384450088 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:20:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:20:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:20:59 compute-0 ceph-mon[74802]: pgmap v2356: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:20:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:01 compute-0 ceph-mon[74802]: pgmap v2357: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:03 compute-0 ceph-mon[74802]: pgmap v2358: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:05 compute-0 ceph-mon[74802]: pgmap v2359: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:07 compute-0 ceph-mon[74802]: pgmap v2360: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:09 compute-0 ceph-mon[74802]: pgmap v2361: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:11 compute-0 ceph-mon[74802]: pgmap v2362: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:21:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:21:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:21:12 compute-0 nova_compute[260022]: 2025-10-01 14:21:12.370 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:13 compute-0 ceph-mon[74802]: pgmap v2363: 305 pgs: 305 active+clean; 21 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct 01 14:21:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:14 compute-0 ceph-mon[74802]: pgmap v2364: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:17 compute-0 ceph-mon[74802]: pgmap v2365: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:19 compute-0 ceph-mon[74802]: pgmap v2366: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct 01 14:21:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:21 compute-0 ceph-mon[74802]: pgmap v2367: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:23 compute-0 ceph-mon[74802]: pgmap v2368: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:25 compute-0 ceph-mon[74802]: pgmap v2369: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 35 op/s
Oct 01 14:21:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:26 compute-0 podman[317486]: 2025-10-01 14:21:26.544029013 +0000 UTC m=+0.078132042 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 01 14:21:26 compute-0 podman[317484]: 2025-10-01 14:21:26.546723409 +0000 UTC m=+0.092161648 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 01 14:21:26 compute-0 podman[317485]: 2025-10-01 14:21:26.554524796 +0000 UTC m=+0.091994592 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 01 14:21:26 compute-0 podman[317483]: 2025-10-01 14:21:26.613627303 +0000 UTC m=+0.165724604 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 01 14:21:27 compute-0 ceph-mon[74802]: pgmap v2370: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:27 compute-0 nova_compute[260022]: 2025-10-01 14:21:27.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:29 compute-0 ceph-mon[74802]: pgmap v2371: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct 01 14:21:30 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct 01 14:21:30 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct 01 14:21:31 compute-0 ceph-mon[74802]: pgmap v2372: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:31 compute-0 ceph-mon[74802]: osdmap e202: 3 total, 3 up, 3 in
Oct 01 14:21:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:33 compute-0 ceph-mon[74802]: pgmap v2374: 305 pgs: 305 active+clean; 21 MiB data, 241 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.346 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.451 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.451 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.452 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:21:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:21:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858678211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:21:33 compute-0 nova_compute[260022]: 2025-10-01 14:21:33.923 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.084 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.085 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5037MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.086 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.086 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:21:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3858678211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.419 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.432 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.433 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.433 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.483 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing inventories for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.499 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating ProviderTree inventory for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.499 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Updating inventory in ProviderTree for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.513 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing aggregate associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.533 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Refreshing trait associations for resource provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f, traits: HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 01 14:21:34 compute-0 nova_compute[260022]: 2025-10-01 14:21:34.580 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:21:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:21:34 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429763371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:21:35 compute-0 nova_compute[260022]: 2025-10-01 14:21:35.010 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:21:35 compute-0 nova_compute[260022]: 2025-10-01 14:21:35.017 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:21:35 compute-0 nova_compute[260022]: 2025-10-01 14:21:35.035 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:21:35 compute-0 nova_compute[260022]: 2025-10-01 14:21:35.038 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:21:35 compute-0 nova_compute[260022]: 2025-10-01 14:21:35.038 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:21:35 compute-0 ceph-mon[74802]: pgmap v2375: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:35 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1429763371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:21:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:37 compute-0 ceph-mon[74802]: pgmap v2376: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct 01 14:21:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct 01 14:21:38 compute-0 ceph-mon[74802]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct 01 14:21:39 compute-0 sudo[317611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:39 compute-0 sudo[317611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:39 compute-0 sudo[317611]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:39 compute-0 ceph-mon[74802]: pgmap v2377: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:39 compute-0 ceph-mon[74802]: osdmap e203: 3 total, 3 up, 3 in
Oct 01 14:21:39 compute-0 sudo[317636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:21:39 compute-0 sudo[317636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:39 compute-0 sudo[317636]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:39 compute-0 sudo[317661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:39 compute-0 sudo[317661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:39 compute-0 sudo[317661]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:39 compute-0 sudo[317686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:21:39 compute-0 sudo[317686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 01 14:21:39 compute-0 sudo[317686]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:21:39 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:21:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:21:39 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:21:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:21:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9029208a-dee8-4081-99b1-21351f93bea9 does not exist
Oct 01 14:21:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 41b36492-6732-4eb0-adbf-9a19b9e81f27 does not exist
Oct 01 14:21:40 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 43047987-5d22-4629-973f-a2b492c745f2 does not exist
Oct 01 14:21:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:21:40 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:21:40 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:21:40 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:21:40 compute-0 sudo[317743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:40 compute-0 sudo[317743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:40 compute-0 sudo[317743]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:40 compute-0 sudo[317768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:21:40 compute-0 sudo[317768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:40 compute-0 sudo[317768]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:21:40 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:21:40 compute-0 sudo[317793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:40 compute-0 sudo[317793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:40 compute-0 sudo[317793]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:40 compute-0 sudo[317818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:21:40 compute-0 sudo[317818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.692672991 +0000 UTC m=+0.060521562 container create 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.651882666 +0000 UTC m=+0.019731247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:40 compute-0 systemd[1]: Started libpod-conmon-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope.
Oct 01 14:21:40 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.810766152 +0000 UTC m=+0.178614743 container init 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.819797589 +0000 UTC m=+0.187646200 container start 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:21:40 compute-0 pedantic_lederberg[317901]: 167 167
Oct 01 14:21:40 compute-0 systemd[1]: libpod-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope: Deactivated successfully.
Oct 01 14:21:40 compute-0 conmon[317901]: conmon 50281c53abcd58ba1087 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope/container/memory.events
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.885533687 +0000 UTC m=+0.253382298 container attach 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 14:21:40 compute-0 podman[317885]: 2025-10-01 14:21:40.886303702 +0000 UTC m=+0.254152343 container died 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 01 14:21:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8921d78b4a09f3b523ff22f2ec2b42046f150808713db59e908070c2fe207687-merged.mount: Deactivated successfully.
Oct 01 14:21:41 compute-0 podman[317885]: 2025-10-01 14:21:41.016367622 +0000 UTC m=+0.384216193 container remove 50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:21:41 compute-0 systemd[1]: libpod-conmon-50281c53abcd58ba10877ff21f1888cdd3df004be7ef6114c0103d82baa88738.scope: Deactivated successfully.
Oct 01 14:21:41 compute-0 nova_compute[260022]: 2025-10-01 14:21:41.034 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:41 compute-0 nova_compute[260022]: 2025-10-01 14:21:41.036 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:41 compute-0 nova_compute[260022]: 2025-10-01 14:21:41.037 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:41 compute-0 nova_compute[260022]: 2025-10-01 14:21:41.037 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:21:41 compute-0 podman[317927]: 2025-10-01 14:21:41.20648785 +0000 UTC m=+0.052647633 container create 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct 01 14:21:41 compute-0 systemd[1]: Started libpod-conmon-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope.
Oct 01 14:21:41 compute-0 podman[317927]: 2025-10-01 14:21:41.183603504 +0000 UTC m=+0.029763337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:41 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:41 compute-0 podman[317927]: 2025-10-01 14:21:41.317140105 +0000 UTC m=+0.163299958 container init 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct 01 14:21:41 compute-0 podman[317927]: 2025-10-01 14:21:41.330274492 +0000 UTC m=+0.176434295 container start 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct 01 14:21:41 compute-0 podman[317927]: 2025-10-01 14:21:41.336764748 +0000 UTC m=+0.182924581 container attach 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 01 14:21:41 compute-0 ceph-mon[74802]: pgmap v2379: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Oct 01 14:21:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:42 compute-0 nova_compute[260022]: 2025-10-01 14:21:42.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:42 compute-0 admiring_cartwright[317943]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:21:42 compute-0 admiring_cartwright[317943]: --> relative data size: 1.0
Oct 01 14:21:42 compute-0 admiring_cartwright[317943]: --> All data devices are unavailable
Oct 01 14:21:42 compute-0 systemd[1]: libpod-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Deactivated successfully.
Oct 01 14:21:42 compute-0 systemd[1]: libpod-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Consumed 1.061s CPU time.
Oct 01 14:21:42 compute-0 podman[317972]: 2025-10-01 14:21:42.496969456 +0000 UTC m=+0.039010820 container died 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:21:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0520122b087dc0fb8789d2701505a06f38cf2939a13bea03824f5c183166441e-merged.mount: Deactivated successfully.
Oct 01 14:21:42 compute-0 podman[317972]: 2025-10-01 14:21:42.56477168 +0000 UTC m=+0.106813034 container remove 8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:21:42 compute-0 systemd[1]: libpod-conmon-8a47ebfb2010f7db830dacf36b33c0f8b9b6a7dd9ee7d8eb0f2902b0d957a95e.scope: Deactivated successfully.
Oct 01 14:21:42 compute-0 sudo[317818]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:42 compute-0 sudo[317987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:42 compute-0 sudo[317987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:42 compute-0 sudo[317987]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:42 compute-0 sudo[318012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:21:42 compute-0 sudo[318012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:42 compute-0 sudo[318012]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:42 compute-0 sudo[318037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:42 compute-0 sudo[318037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:42 compute-0 sudo[318037]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:42 compute-0 sudo[318062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:21:42 compute-0 sudo[318062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.243294451 +0000 UTC m=+0.045270099 container create ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:21:43 compute-0 systemd[1]: Started libpod-conmon-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope.
Oct 01 14:21:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.21618467 +0000 UTC m=+0.018160338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.345801486 +0000 UTC m=+0.147777144 container init ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.356780045 +0000 UTC m=+0.158755733 container start ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:21:43 compute-0 cranky_haslett[318145]: 167 167
Oct 01 14:21:43 compute-0 systemd[1]: libpod-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope: Deactivated successfully.
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.371059418 +0000 UTC m=+0.173035066 container attach ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.371641377 +0000 UTC m=+0.173617075 container died ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 01 14:21:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-948fe2240f30d4efa527d546569ae693c8f392130033dab0c3c7b197708247b3-merged.mount: Deactivated successfully.
Oct 01 14:21:43 compute-0 podman[318129]: 2025-10-01 14:21:43.488585581 +0000 UTC m=+0.290561269 container remove ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:21:43 compute-0 systemd[1]: libpod-conmon-ace20cc15548dcf08dcc63768cd166bcfa1860b5d8d1bc9f5eee7004c1912ef9.scope: Deactivated successfully.
Oct 01 14:21:43 compute-0 ceph-mon[74802]: pgmap v2380: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct 01 14:21:43 compute-0 nova_compute[260022]: 2025-10-01 14:21:43.732 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:43 compute-0 podman[318173]: 2025-10-01 14:21:43.764622008 +0000 UTC m=+0.084544696 container create 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:21:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:43 compute-0 podman[318173]: 2025-10-01 14:21:43.725093283 +0000 UTC m=+0.045016011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:43 compute-0 systemd[1]: Started libpod-conmon-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope.
Oct 01 14:21:43 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:43 compute-0 podman[318173]: 2025-10-01 14:21:43.931379034 +0000 UTC m=+0.251301742 container init 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 14:21:43 compute-0 podman[318173]: 2025-10-01 14:21:43.942994283 +0000 UTC m=+0.262916971 container start 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:21:43 compute-0 podman[318173]: 2025-10-01 14:21:43.964279719 +0000 UTC m=+0.284202437 container attach 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct 01 14:21:44 compute-0 angry_goodall[318190]: {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     "0": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "devices": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "/dev/loop3"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             ],
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_name": "ceph_lv0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_size": "21470642176",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "name": "ceph_lv0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "tags": {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_name": "ceph",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.crush_device_class": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.encrypted": "0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_id": "0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.vdo": "0"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             },
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "vg_name": "ceph_vg0"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         }
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     ],
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     "1": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "devices": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "/dev/loop4"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             ],
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_name": "ceph_lv1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_size": "21470642176",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "name": "ceph_lv1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "tags": {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_name": "ceph",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.crush_device_class": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.encrypted": "0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_id": "1",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.vdo": "0"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             },
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "vg_name": "ceph_vg1"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         }
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     ],
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     "2": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "devices": [
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "/dev/loop5"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             ],
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_name": "ceph_lv2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_size": "21470642176",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "name": "ceph_lv2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "tags": {
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.cluster_name": "ceph",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.crush_device_class": "",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.encrypted": "0",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osd_id": "2",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:                 "ceph.vdo": "0"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             },
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "type": "block",
Oct 01 14:21:44 compute-0 angry_goodall[318190]:             "vg_name": "ceph_vg2"
Oct 01 14:21:44 compute-0 angry_goodall[318190]:         }
Oct 01 14:21:44 compute-0 angry_goodall[318190]:     ]
Oct 01 14:21:44 compute-0 angry_goodall[318190]: }
Oct 01 14:21:44 compute-0 systemd[1]: libpod-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope: Deactivated successfully.
Oct 01 14:21:44 compute-0 podman[318173]: 2025-10-01 14:21:44.720790456 +0000 UTC m=+1.040713174 container died 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 01 14:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2bc87d47ae21f28e1cc6edf9f6589a93713adc79164c37c61bdb0afb3823b4e-merged.mount: Deactivated successfully.
Oct 01 14:21:44 compute-0 podman[318173]: 2025-10-01 14:21:44.778717546 +0000 UTC m=+1.098640234 container remove 0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goodall, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:21:44 compute-0 systemd[1]: libpod-conmon-0f58d4901191088884218cd870481bac63c65c58bb21538201f9a273d8ff33f1.scope: Deactivated successfully.
Oct 01 14:21:44 compute-0 sudo[318062]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:44 compute-0 sudo[318213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:44 compute-0 sudo[318213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:44 compute-0 sudo[318213]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:44 compute-0 sudo[318238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:21:44 compute-0 sudo[318238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:44 compute-0 sudo[318238]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:45 compute-0 sudo[318263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:45 compute-0 sudo[318263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:45 compute-0 sudo[318263]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:45 compute-0 sudo[318288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:21:45 compute-0 sudo[318288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:45 compute-0 nova_compute[260022]: 2025-10-01 14:21:45.362 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:45 compute-0 nova_compute[260022]: 2025-10-01 14:21:45.363 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:21:45 compute-0 nova_compute[260022]: 2025-10-01 14:21:45.364 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:21:45 compute-0 nova_compute[260022]: 2025-10-01 14:21:45.381 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:21:45 compute-0 nova_compute[260022]: 2025-10-01 14:21:45.382 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:45 compute-0 ceph-mon[74802]: pgmap v2381: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.611134944 +0000 UTC m=+0.083722590 container create 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 01 14:21:45 compute-0 systemd[1]: Started libpod-conmon-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope.
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.567999894 +0000 UTC m=+0.040587630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:45 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.6928704 +0000 UTC m=+0.165458076 container init 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.703744856 +0000 UTC m=+0.176332492 container start 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:21:45 compute-0 upbeat_goodall[318367]: 167 167
Oct 01 14:21:45 compute-0 systemd[1]: libpod-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope: Deactivated successfully.
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.707307069 +0000 UTC m=+0.179894795 container attach 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.711106879 +0000 UTC m=+0.183694605 container died 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 01 14:21:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-229a64058ece69e19958f83daf55e906807f891585f2b01390634532cd62af81-merged.mount: Deactivated successfully.
Oct 01 14:21:45 compute-0 podman[318351]: 2025-10-01 14:21:45.760427426 +0000 UTC m=+0.233015072 container remove 7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:21:45 compute-0 systemd[1]: libpod-conmon-7ebd7d755da5578772a0eae97da81afbbc0dbf1a89c7f9bcf4c510b4df6de910.scope: Deactivated successfully.
Oct 01 14:21:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:45 compute-0 podman[318390]: 2025-10-01 14:21:45.97530107 +0000 UTC m=+0.037895044 container create 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 14:21:46 compute-0 systemd[1]: Started libpod-conmon-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope.
Oct 01 14:21:46 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:21:46 compute-0 podman[318390]: 2025-10-01 14:21:45.959762727 +0000 UTC m=+0.022356721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:21:46 compute-0 podman[318390]: 2025-10-01 14:21:46.058947207 +0000 UTC m=+0.121541231 container init 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 01 14:21:46 compute-0 podman[318390]: 2025-10-01 14:21:46.072195137 +0000 UTC m=+0.134789151 container start 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:21:46 compute-0 podman[318390]: 2025-10-01 14:21:46.076637619 +0000 UTC m=+0.139231633 container attach 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 14:21:47 compute-0 cool_nobel[318407]: {
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_id": 0,
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "type": "bluestore"
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     },
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_id": 2,
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "type": "bluestore"
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     },
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_id": 1,
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:21:47 compute-0 cool_nobel[318407]:         "type": "bluestore"
Oct 01 14:21:47 compute-0 cool_nobel[318407]:     }
Oct 01 14:21:47 compute-0 cool_nobel[318407]: }
Oct 01 14:21:47 compute-0 systemd[1]: libpod-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope: Deactivated successfully.
Oct 01 14:21:47 compute-0 podman[318390]: 2025-10-01 14:21:47.021663073 +0000 UTC m=+1.084257047 container died 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct 01 14:21:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c519cc7702f7940df9cfe62c00551c03f9382f4a994896f93b0a6823b0fecc15-merged.mount: Deactivated successfully.
Oct 01 14:21:47 compute-0 podman[318390]: 2025-10-01 14:21:47.073772339 +0000 UTC m=+1.136366353 container remove 1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:21:47 compute-0 systemd[1]: libpod-conmon-1645f8b4ab0657ce78f8b3522a49ba042bf2725723efe1a473296692e2a8769d.scope: Deactivated successfully.
Oct 01 14:21:47 compute-0 sudo[318288]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:21:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:21:47 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9849c233-6ed0-4ae1-9295-db46be10ea84 does not exist
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev f5104160-af60-4ab3-8971-e3f64f76b835 does not exist
Oct 01 14:21:47 compute-0 sudo[318453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:21:47 compute-0 sudo[318453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:47 compute-0 sudo[318453]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:47 compute-0 sudo[318478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:21:47 compute-0 sudo[318478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:21:47 compute-0 sudo[318478]: pam_unix(sudo:session): session closed for user root
Oct 01 14:21:47 compute-0 ceph-mon[74802]: pgmap v2382: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:47 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:21:47
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr']
Oct 01 14:21:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:21:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:21:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:21:49 compute-0 nova_compute[260022]: 2025-10-01 14:21:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:21:49 compute-0 ceph-mon[74802]: pgmap v2383: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:51 compute-0 ceph-mon[74802]: pgmap v2384: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:53 compute-0 ceph-mon[74802]: pgmap v2385: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:21:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:21:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:21:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:21:55 compute-0 ceph-mon[74802]: pgmap v2386: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:21:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1455288149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:21:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:57 compute-0 podman[318505]: 2025-10-01 14:21:57.504676801 +0000 UTC m=+0.060312997 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:21:57 compute-0 podman[318504]: 2025-10-01 14:21:57.504786085 +0000 UTC m=+0.063846690 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 14:21:57 compute-0 podman[318506]: 2025-10-01 14:21:57.504833606 +0000 UTC m=+0.056027930 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 01 14:21:57 compute-0 podman[318503]: 2025-10-01 14:21:57.529527171 +0000 UTC m=+0.088485532 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 01 14:21:57 compute-0 ceph-mon[74802]: pgmap v2387: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:21:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:21:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:21:59 compute-0 ceph-mon[74802]: pgmap v2388: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:21:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:01 compute-0 ceph-mon[74802]: pgmap v2389: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:03 compute-0 ceph-mon[74802]: pgmap v2390: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:05 compute-0 ceph-mon[74802]: pgmap v2391: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:07 compute-0 ceph-mon[74802]: pgmap v2392: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:07 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:08 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:09 compute-0 ceph-mon[74802]: pgmap v2393: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:09 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:10 compute-0 ceph-mon[74802]: pgmap v2394: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:11 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.348 161890 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.349 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:22:12 compute-0 ovn_metadata_agent[161885]: 2025-10-01 14:22:12.349 161890 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:22:12 compute-0 ceph-mon[74802]: pgmap v2395: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:13 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:13 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:14 compute-0 ceph-mon[74802]: pgmap v2396: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:15 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:16 compute-0 ceph-mon[74802]: pgmap v2397: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:17 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:18 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:18 compute-0 ceph-mon[74802]: pgmap v2398: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:19 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:20 compute-0 ceph-mon[74802]: pgmap v2399: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:21 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:22 compute-0 ceph-mon[74802]: pgmap v2400: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:23 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:23 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:24 compute-0 ceph-mon[74802]: pgmap v2401: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:25 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:26 compute-0 ceph-mon[74802]: pgmap v2402: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:27 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:28 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:28 compute-0 nova_compute[260022]: 2025-10-01 14:22:28.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:28 compute-0 podman[318581]: 2025-10-01 14:22:28.509510644 +0000 UTC m=+0.061044699 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20250923, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:22:28 compute-0 podman[318583]: 2025-10-01 14:22:28.529853461 +0000 UTC m=+0.067044351 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20250923, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c)
Oct 01 14:22:28 compute-0 podman[318582]: 2025-10-01 14:22:28.542489332 +0000 UTC m=+0.081166709 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 01 14:22:28 compute-0 podman[318580]: 2025-10-01 14:22:28.574671974 +0000 UTC m=+0.124142324 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=36bccb96575468ec919301205d8daa2c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 01 14:22:28 compute-0 ceph-mon[74802]: pgmap v2403: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:29 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:31 compute-0 ceph-mon[74802]: pgmap v2404: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:31 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:33 compute-0 ceph-mon[74802]: pgmap v2405: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.380 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.381 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.381 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:22:33 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:22:33 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300388876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:22:33 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:33 compute-0 nova_compute[260022]: 2025-10-01 14:22:33.846 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:22:34 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1300388876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.030 2 WARNING nova.virt.libvirt.driver [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.032 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5012MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.032 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.033 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.115 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 3d91db2b-812b-47ea-a0f8-0384b4c68597 has allocations against this compute host but is not found in the database.
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.130 2 INFO nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Instance 84424f30-80d3-425e-b60f-86809ad3c076 has allocations against this compute host but is not found in the database.
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.131 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.434 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 01 14:22:34 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 01 14:22:34 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946375056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.850 2 DEBUG oslo_concurrency.processutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.855 2 DEBUG nova.compute.provider_tree [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed in ProviderTree for provider: c1b9017d-7e6f-44ea-9ee2-bc19313d736f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.874 2 DEBUG nova.scheduler.client.report [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Inventory has not changed for provider c1b9017d-7e6f-44ea-9ee2-bc19313d736f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.876 2 DEBUG nova.compute.resource_tracker [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 01 14:22:34 compute-0 nova_compute[260022]: 2025-10-01 14:22:34.876 2 DEBUG oslo_concurrency.lockutils [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 01 14:22:34 compute-0 sshd-session[318699]: Accepted publickey for zuul from 192.168.122.10 port 43946 ssh2: ECDSA SHA256:z+vj4+gIVPzQCIxMnEovOWyIJAuUg9VKFZBWkIrP8eg
Oct 01 14:22:34 compute-0 systemd-logind[818]: New session 56 of user zuul.
Oct 01 14:22:34 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct 01 14:22:34 compute-0 sshd-session[318699]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 01 14:22:35 compute-0 ceph-mon[74802]: pgmap v2406: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:35 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2946375056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 01 14:22:35 compute-0 sudo[318703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 01 14:22:35 compute-0 sudo[318703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 01 14:22:35 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:37 compute-0 ceph-mon[74802]: pgmap v2407: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:37 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:38 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:38 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.218249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558218311, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1786, "num_deletes": 260, "total_data_size": 2875522, "memory_usage": 2910992, "flush_reason": "Manual Compaction"}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558265313, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2836157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46675, "largest_seqno": 48460, "table_properties": {"data_size": 2827808, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16713, "raw_average_key_size": 19, "raw_value_size": 2811159, "raw_average_value_size": 3358, "num_data_blocks": 229, "num_entries": 837, "num_filter_entries": 837, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328369, "oldest_key_time": 1759328369, "file_creation_time": 1759328558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 47147 microseconds, and 7402 cpu microseconds.
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.265401) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2836157 bytes OK
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.265474) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268611) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268637) EVENT_LOG_v1 {"time_micros": 1759328558268629, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.268661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2867872, prev total WAL file size 2867872, number of live WAL files 2.
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.270067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373631' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2769KB)], [110(8032KB)]
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558270138, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11061765, "oldest_snapshot_seqno": -1}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6508 keys, 10960929 bytes, temperature: kUnknown
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558340518, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10960929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10915572, "index_size": 27967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168519, "raw_average_key_size": 25, "raw_value_size": 10795860, "raw_average_value_size": 1658, "num_data_blocks": 1120, "num_entries": 6508, "num_filter_entries": 6508, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.340931) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10960929 bytes
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.342810) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.9) OK, records in: 7042, records dropped: 534 output_compression: NoCompression
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.342848) EVENT_LOG_v1 {"time_micros": 1759328558342831, "job": 66, "event": "compaction_finished", "compaction_time_micros": 70484, "compaction_time_cpu_micros": 26552, "output_level": 6, "num_output_files": 1, "total_output_size": 10960929, "num_input_records": 7042, "num_output_records": 6508, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558344173, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328558347417, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.269980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:22:38.347574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:22:38 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15121 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:39 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 01 14:22:39 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762045296' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 14:22:39 compute-0 ceph-mon[74802]: pgmap v2408: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:39 compute-0 ceph-mon[74802]: from='client.15119 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:39 compute-0 ceph-mon[74802]: from='client.15121 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:39 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1762045296' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 01 14:22:39 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:40 compute-0 nova_compute[260022]: 2025-10-01 14:22:40.877 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:40 compute-0 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:40 compute-0 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:40 compute-0 nova_compute[260022]: 2025-10-01 14:22:40.878 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 01 14:22:41 compute-0 ceph-mon[74802]: pgmap v2409: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:41 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:42 compute-0 ovs-vsctl[318988]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 01 14:22:43 compute-0 virtqemud[260323]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 01 14:22:43 compute-0 virtqemud[260323]: hostname: compute-0
Oct 01 14:22:43 compute-0 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 01 14:22:43 compute-0 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 01 14:22:43 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:43 compute-0 virtqemud[260323]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 01 14:22:43 compute-0 ceph-mon[74802]: pgmap v2410: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:43 compute-0 nova_compute[260022]: 2025-10-01 14:22:43.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:43 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:43 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: cache status {prefix=cache status} (starting...)
Oct 01 14:22:43 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: client ls {prefix=client ls} (starting...)
Oct 01 14:22:44 compute-0 lvm[319329]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct 01 14:22:44 compute-0 lvm[319329]: VG ceph_vg2 finished
Oct 01 14:22:44 compute-0 lvm[319353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 01 14:22:44 compute-0 lvm[319353]: VG ceph_vg0 finished
Oct 01 14:22:44 compute-0 lvm[319359]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct 01 14:22:44 compute-0 lvm[319359]: VG ceph_vg1 finished
Oct 01 14:22:44 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:44 compute-0 kernel: block loop5: the capability attribute has been deprecated.
Oct 01 14:22:44 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: damage ls {prefix=damage ls} (starting...)
Oct 01 14:22:44 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:44 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump loads {prefix=dump loads} (starting...)
Oct 01 14:22:44 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 01 14:22:45 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 01 14:22:45 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 01 14:22:45 compute-0 ceph-mon[74802]: pgmap v2411: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:45 compute-0 ceph-mon[74802]: from='client.15125 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:45 compute-0 ceph-mon[74802]: from='client.15127 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:45 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 01 14:22:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 01 14:22:45 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203451135' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 14:22:45 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:45 compute-0 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 14:22:45 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:45.519+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 14:22:45 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 01 14:22:45 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 01 14:22:45 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:22:45 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3268758393' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:45 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 01 14:22:45 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561900908' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: ops {prefix=ops} (starting...)
Oct 01 14:22:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 01 14:22:46 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239454090' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/203451135' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: from='client.15133 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3268758393' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2561900908' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/239454090' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 01 14:22:46 compute-0 nova_compute[260022]: 2025-10-01 14:22:46.344 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:46 compute-0 nova_compute[260022]: 2025-10-01 14:22:46.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 01 14:22:46 compute-0 nova_compute[260022]: 2025-10-01 14:22:46.345 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 01 14:22:46 compute-0 nova_compute[260022]: 2025-10-01 14:22:46.365 2 DEBUG nova.compute.manager [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 01 14:22:46 compute-0 nova_compute[260022]: 2025-10-01 14:22:46.365 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 14:22:46 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716181558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 01 14:22:46 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288882502' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: session ls {prefix=session ls} (starting...)
Oct 01 14:22:46 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 14:22:46 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80145924' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 14:22:46 compute-0 ceph-mds[100898]: mds.cephfs.compute-0.vhkcbm asok_command: status {prefix=status} (starting...)
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 14:22:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443807167' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mon[74802]: pgmap v2412: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1716181558' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4288882502' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/80145924' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1443807167' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 14:22:47 compute-0 sudo[319780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:47 compute-0 sudo[319780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:47 compute-0 sudo[319780]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:47 compute-0 sudo[319811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:22:47 compute-0 sudo[319811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:47 compute-0 sudo[319811]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:47 compute-0 sudo[319851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:47 compute-0 sudo[319851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:47 compute-0 sudo[319851]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:47 compute-0 sudo[319881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 01 14:22:47 compute-0 sudo[319881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 14:22:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954319479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] scanning for idle connections..
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [volumes INFO mgr_util] cleaning up connections: []
Oct 01 14:22:47 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 01 14:22:47 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285600281' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Optimize plan auto_2025-10-01_14:22:47
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] do_upmap
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] pools ['vms', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Oct 01 14:22:47 compute-0 ceph-mgr[75103]: [balancer INFO root] prepared 0/10 changes
Oct 01 14:22:48 compute-0 sudo[319881]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994494705' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 1df42182-d274-41f1-a5cc-eefc373f49eb does not exist
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev e2512331-12f2-4704-b1bc-b660b8ee3eac does not exist
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 3d452249-2531-46fc-940d-ea17360d0de5 does not exist
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:48 compute-0 sudo[320014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:48 compute-0 sudo[320014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:48 compute-0 sudo[320014]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:48 compute-0 sudo[320046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:22:48 compute-0 sudo[320046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:48 compute-0 sudo[320046]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='client.15147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='client.15151 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1954319479' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3285600281' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2994494705' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 01 14:22:48 compute-0 sudo[320091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:48 compute-0 sudo[320091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:48 compute-0 sudo[320091]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: [rbd_support INFO root] load_schedules: images, start_after=
Oct 01 14:22:48 compute-0 sudo[320117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Oct 01 14:22:48 compute-0 sudo[320117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540409583' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 14:22:48 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409442186' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.761856861 +0000 UTC m=+0.046834469 container create f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 01 14:22:48 compute-0 systemd[1]: Started libpod-conmon-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope.
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15163 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 14:22:48 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:48.830+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.737184548 +0000 UTC m=+0.022162196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:48 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.866557966 +0000 UTC m=+0.151535584 container init f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.875475149 +0000 UTC m=+0.160452747 container start f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.879692354 +0000 UTC m=+0.164669972 container attach f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 01 14:22:48 compute-0 systemd[1]: libpod-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope: Deactivated successfully.
Oct 01 14:22:48 compute-0 optimistic_leakey[320245]: 167 167
Oct 01 14:22:48 compute-0 conmon[320245]: conmon f4f3f21461112af91aa8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope/container/memory.events
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.884044481 +0000 UTC m=+0.169022129 container died f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 01 14:22:48 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a40b4b25b986483a24120b746ae56144c35c47eeebfd88ba8d80f76978c3fb7b-merged.mount: Deactivated successfully.
Oct 01 14:22:48 compute-0 podman[320230]: 2025-10-01 14:22:48.935720053 +0000 UTC m=+0.220697661 container remove f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 01 14:22:48 compute-0 systemd[1]: libpod-conmon-f4f3f21461112af91aa821a0e35e6a6edb2946ec681c8d5e6ee56be144a7dcc0.scope: Deactivated successfully.
Oct 01 14:22:49 compute-0 podman[320317]: 2025-10-01 14:22:49.121197613 +0000 UTC m=+0.049771961 container create 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct 01 14:22:49 compute-0 systemd[1]: Started libpod-conmon-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope.
Oct 01 14:22:49 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:49 compute-0 podman[320317]: 2025-10-01 14:22:49.104016498 +0000 UTC m=+0.032590866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:49 compute-0 podman[320317]: 2025-10-01 14:22:49.209510059 +0000 UTC m=+0.138084417 container init 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct 01 14:22:49 compute-0 podman[320317]: 2025-10-01 14:22:49.215142067 +0000 UTC m=+0.143716415 container start 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct 01 14:22:49 compute-0 podman[320317]: 2025-10-01 14:22:49.218084191 +0000 UTC m=+0.146658539 container attach 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 01 14:22:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 01 14:22:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935810156' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mon[74802]: pgmap v2413: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:49 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1540409583' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2409442186' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1935810156' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 01 14:22:49 compute-0 nova_compute[260022]: 2025-10-01 14:22:49.345 2 DEBUG oslo_service.periodic_task [None req-0864f08d-7814-4d6e-bd04-da2bd7da487c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 01 14:22:49 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 01 14:22:49 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996099488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 14:22:49 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:50 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 01 14:22:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1305062025' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 14:22:50 compute-0 stoic_sammet[320334]: --> passed data devices: 0 physical, 3 LVM
Oct 01 14:22:50 compute-0 stoic_sammet[320334]: --> relative data size: 1.0
Oct 01 14:22:50 compute-0 stoic_sammet[320334]: --> All data devices are unavailable
Oct 01 14:22:50 compute-0 systemd[1]: libpod-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope: Deactivated successfully.
Oct 01 14:22:50 compute-0 podman[320317]: 2025-10-01 14:22:50.27789307 +0000 UTC m=+1.206467439 container died 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:22:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bca1144401dd9fc389ba9c5fc1871751fd3dae308fc723f709e6bfbf2986246e-merged.mount: Deactivated successfully.
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.15163 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.15165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.15169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.15173 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/996099488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 01 14:22:50 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1305062025' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 01 14:22:50 compute-0 podman[320317]: 2025-10-01 14:22:50.339381864 +0000 UTC m=+1.267956212 container remove 2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct 01 14:22:50 compute-0 systemd[1]: libpod-conmon-2f566a592e0382ad2f01367db3955dc13528fd306831846301a8e4b4001bdc7a.scope: Deactivated successfully.
Oct 01 14:22:50 compute-0 sudo[320117]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:50 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 sudo[320737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:50 compute-0 sudo[320737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:50 compute-0 sudo[320737]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:04.423045+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:05.423205+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:06.423376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:07.423504+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:08.423680+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:09.423852+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:10.424050+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:11.424310+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:12.424576+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:13.424784+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:14.424928+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:15.425104+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:16.425353+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:17.425609+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:18.425832+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:19.425990+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:20.426160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:21.426434+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:22.426640+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:23.426783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:24.426903+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:25.427157+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:26.427352+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:27.427571+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:28.427771+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:29.427919+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:30.428144+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:31.428386+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:32.428582+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:33.428817+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:34.429027+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:35.429157+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6875 writes, 27K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6875 writes, 1441 syncs, 4.77 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 826 writes, 1986 keys, 826 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s
                                           Interval WAL: 826 writes, 369 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:36.429349+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:37.429557+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:38.429828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:39.430028+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:40.430232+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:41.430376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:42.430575+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:43.430765+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:44.431002+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:45.431144+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:46.431290+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:47.431459+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:48.431618+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:49.431846+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:50.432046+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:51.432200+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:52.432471+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:53.432719+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:54.432909+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:55.433052+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:56.433244+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:57.433394+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:58.433565+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:59.433711+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:00.433876+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:01.434016+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:02.434249+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:03.434452+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:04.434620+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:05.434783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:06.434998+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:07.435203+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:08.435362+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:09.435634+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:10.435857+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 26935296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:11.436066+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:12.436261+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:13.436513+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:14.436903+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:15.437092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:16.437314+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:17.437511+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:18.437702+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:19.437939+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:20.438270+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:21.438508+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:22.438886+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:23.439079+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:24.439301+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:25.439507+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:26.439672+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:27.440930+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:28.441067+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:29.441198+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:30.441360+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:31.441498+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:32.441629+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:33.441830+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:34.441987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038906 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:35.442133+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 26927104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 216.638732910s of 216.647628784s, submitted: 13
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,1])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:36.442330+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 26886144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:37.442449+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:38.442600+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:39.442744+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:40.442877+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:41.442993+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:42.443173+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:43.443345+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:44.443528+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:45.443715+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:46.443973+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:47.444220+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:48.444370+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:49.444522+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:50.444713+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 26828800 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:51.444914+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:52.445092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:53.445260+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:54.445405+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:55.445583+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:56.445776+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:57.445944+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:58.446094+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:59.446234+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:00.446397+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:01.446542+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:02.446705+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:03.446850+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:04.446994+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:05.447207+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:06.447360+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:07.447535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:08.447722+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:09.447885+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:10.448069+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:11.448229+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:12.448429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:13.448580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:14.448770+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:15.448921+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:16.449086+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:17.449216+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 26820608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:18.449365+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:19.449513+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:20.449719+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:21.449895+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:22.450030+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:23.450204+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:24.450397+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:25.450574+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:26.450771+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:27.450917+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:28.451066+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:29.451208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:30.451345+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:31.451485+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:32.451666+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:33.451817+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:34.451988+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:35.452161+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:36.452319+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:37.452457+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:38.452631+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:39.452803+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:40.453484+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:41.453624+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:42.453799+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:43.453921+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:44.454049+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:45.454208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:46.454374+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:47.454525+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:48.454661+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:49.454814+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:50.454982+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:51.455130+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:52.455300+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:53.455438+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:54.455571+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:55.455709+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:56.455876+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:57.456037+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:58.456206+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:59.456384+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:00.456561+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:01.456714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:02.456887+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:03.456990+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:04.457126+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:05.457282+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:06.457581+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:07.457714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:08.457864+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:09.458014+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:10.458234+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:11.458366+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:12.458482+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:13.459191+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:14.459346+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:15.459535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038026 data_alloc: 218103808 data_used: 311296
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762c00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 99.153511047s of 99.443305969s, submitted: 90
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fbdcc000/0x0/0x4ffc00000, data 0xd67ccd/0xe52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 26812416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 161 ms_handle_reset con 0x55b1b2762c00 session 0x55b1b1ce9680
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:16.459686+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 26804224 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 162 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1ce9860
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:17.459829+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 26697728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 163 ms_handle_reset con 0x55b1b2762000 session 0x55b1b1ce9e00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:18.459971+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762400
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 26648576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 164 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1cc9a40
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:19.460122+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 26599424 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fbdbd000/0x0/0x4ffc00000, data 0xd6cff0/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 165 ms_handle_reset con 0x55b1b2762800 session 0x55b1b267f2c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fbdb8000/0x0/0x4ffc00000, data 0xd6f8c6/0xe65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:20.460285+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068358 data_alloc: 218103808 data_used: 327680
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 26599424 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1475000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:21.460451+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 166 ms_handle_reset con 0x55b1b1475000 session 0x55b1b267f4a0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 26566656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:22.460615+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 26566656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:23.460764+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 167 ms_handle_reset con 0x55b1b0261800 session 0x55b1b2692b40
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 26550272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:24.460947+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fbdab000/0x0/0x4ffc00000, data 0xd75f83/0xe72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 26542080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762400
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:25.461066+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083078 data_alloc: 218103808 data_used: 344064
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.824281693s of 10.095853806s, submitted: 78
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 26492928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 168 ms_handle_reset con 0x55b1b2762000 session 0x55b1b26932c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:26.461220+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 26353664 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 169 ms_handle_reset con 0x55b1b2762800 session 0x55b1b26a5680
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:27.461335+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 170 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1c51860
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 26288128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 170 ms_handle_reset con 0x55b1b1474000 session 0x55b1b26a5e00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:28.461485+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 26222592 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 171 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1bc8000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 171 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1e0b2c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:29.461841+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762400
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 26140672 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 172 ms_handle_reset con 0x55b1b2762000 session 0x55b1afe63a40
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 172 heartbeat osd_stat(store_statfs(0x4fb191000/0x0/0x4ffc00000, data 0xd7ccd3/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:30.462088+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097478 data_alloc: 218103808 data_used: 368640
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 26083328 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 173 ms_handle_reset con 0x55b1b2762800 session 0x55b1b27061e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 173 ms_handle_reset con 0x55b1b2762400 session 0x55b1b1c06000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:31.462232+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 26042368 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:32.462427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb994000/0x0/0x4ffc00000, data 0xd7e48e/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 174 heartbeat osd_stat(store_statfs(0x4fb994000/0x0/0x4ffc00000, data 0xd7e48e/0xe79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:33.462552+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:34.462691+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 79863808 unmapped: 24961024 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:35.462835+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101998 data_alloc: 218103808 data_used: 397312
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.224699020s of 10.097949028s, submitted: 242
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 23912448 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 175 ms_handle_reset con 0x55b1b0261800 session 0x55b1b27065a0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 175 heartbeat osd_stat(store_statfs(0x4fa7f0000/0x0/0x4ffc00000, data 0xd7ff2d/0xe7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:36.462990+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:37.463154+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:38.463300+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:39.463521+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa7ea000/0x0/0x4ffc00000, data 0xd835ac/0xe83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:40.463697+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109709 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:41.463832+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 176 heartbeat osd_stat(store_statfs(0x4fa7ea000/0x0/0x4ffc00000, data 0xd835ac/0xe83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 23896064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:42.464004+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 177 ms_handle_reset con 0x55b1b1474000 session 0x55b1b2706b40
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:43.464956+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:44.465109+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:45.465241+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112484 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:46.465435+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:47.465582+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:48.465714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:49.465923+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:50.466128+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112484 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:51.466316+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:52.466438+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 23879680 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 177 heartbeat osd_stat(store_statfs(0x4fa7e7000/0x0/0x4ffc00000, data 0xd8515a/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.135969162s of 17.270618439s, submitted: 89
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:53.466549+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:54.466724+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:55.467050+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:56.467256+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:57.467401+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:58.467679+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:59.467841+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:00.468036+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:01.468177+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:02.468341+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:03.468494+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:04.468642+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:05.468786+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:06.468984+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:07.469127+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:08.469579+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:09.469815+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:10.470024+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:11.470223+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:12.470387+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:13.470560+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:14.470717+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:15.470891+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:16.471030+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:17.471174+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:18.471326+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:19.471496+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:20.471699+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:21.471895+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:22.472047+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:23.472208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:24.473236+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:25.473394+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:26.473545+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:27.473711+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:28.474069+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:29.474257+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:30.474443+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:31.474653+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:32.474885+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:33.475050+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:34.475223+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:35.475424+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:36.475778+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:37.476057+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:38.476395+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:39.476579+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:40.476905+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:41.477163+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:42.477429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:43.477619+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:44.477969+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:45.478223+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:46.478439+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:47.478714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:48.479706+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:49.480653+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:50.481229+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:51.481689+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:52.481864+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:53.482508+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:54.482987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 23871488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:55.483604+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:56.483869+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:57.484178+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:58.484474+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:59.484887+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:00.485248+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:01.485557+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:02.485878+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:03.486152+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:04.486379+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:05.486595+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:06.486832+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:07.487032+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:08.487343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:09.487577+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 23863296 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:10.487860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:11.488096+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:12.488347+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:13.488576+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:14.488928+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:15.489219+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:16.489530+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:17.489766+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:18.490058+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:19.490375+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:20.490781+0000)
Oct 01 14:22:50 compute-0 sudo[320774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:21.490996+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:22.491290+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:23.491539+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:24.491808+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:25.492058+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:26.492296+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:27.492562+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:28.492933+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:29.493183+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:30.493568+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:31.493805+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:32.493993+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:33.494250+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:34.494501+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:35.494755+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 sudo[320774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115266 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 23855104 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets getting new tickets!
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:36.495428+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _finish_auth 0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:36.496266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:37.495771+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:38.496032+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 heartbeat osd_stat(store_statfs(0x4fa7e5000/0x0/0x4ffc00000, data 0xd86bbd/0xe88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 23846912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:39.496271+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 107.036346436s of 107.046646118s, submitted: 13
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 23830528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:40.496619+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b2762000 session 0x55b1b2707860
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121695 data_alloc: 218103808 data_used: 421888
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81002496 unmapped: 23822336 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:41.496881+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b2762800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b2762800 session 0x55b1b163b2c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c81400
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 179 ms_handle_reset con 0x55b1b1c81400 session 0x55b1af650d20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 179 heartbeat osd_stat(store_statfs(0x4fa7e1000/0x0/0x4ffc00000, data 0xd8874a/0xe8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: mgrc ms_handle_reset ms_handle_reset con 0x55b1b1628000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2102413293
Oct 01 14:22:50 compute-0 ceph-osd[90500]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2102413293,v1:192.168.122.100:6801/2102413293]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: get_auth_request con 0x55b1b2762400 auth_method 0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: mgrc handle_mgr_configure stats_period=5
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 23642112 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:42.497082+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b0261800 session 0x55b1b1c07c20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1afcf00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b273f800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 180 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1e0a1e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 23674880 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:43.497250+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 180 heartbeat osd_stat(store_statfs(0x4fa7de000/0x0/0x4ffc00000, data 0xd8a6c7/0xe90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 23748608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:44.497410+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 181 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1e2000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e400
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 23724032 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:45.497599+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 182 ms_handle_reset con 0x55b1b1c9e400 session 0x55b1b030da40
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132129 data_alloc: 218103808 data_used: 430080
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:46.497783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:47.497869+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:48.498023+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa7d8000/0x0/0x4ffc00000, data 0xd8da59/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:49.498167+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 23691264 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 182 heartbeat osd_stat(store_statfs(0x4fa7d8000/0x0/0x4ffc00000, data 0xd8da59/0xe94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.600206375s of 10.002857208s, submitted: 112
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:50.498308+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 23666688 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134112 data_alloc: 218103808 data_used: 430080
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 183 ms_handle_reset con 0x55b1b0261800 session 0x55b1b2706d20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:51.498448+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 23642112 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 184 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1c934a0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:52.498631+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 sudo[320774]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:53.498801+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:54.498923+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0xd92c52/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:55.499063+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 23625728 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141604 data_alloc: 218103808 data_used: 438272
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:56.499177+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:57.499343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 185 heartbeat osd_stat(store_statfs(0x4fa7d0000/0x0/0x4ffc00000, data 0xd92c52/0xe9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:58.499550+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:59.499815+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:00.500009+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:01.500172+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:02.500364+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:03.500535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:04.500693+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:05.500903+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:06.501068+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:07.501242+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:08.501414+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:09.501588+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:10.501787+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:11.501894+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:12.502166+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:13.502270+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:14.502457+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:15.502620+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144738 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:16.502860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:17.503064+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:18.503241+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:19.503383+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:20.503615+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:21.503839+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:22.503996+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:23.504226+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:24.504437+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:25.504642+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:26.504821+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:27.504995+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:28.505160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:29.505338+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:30.505556+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:31.505812+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:32.505970+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:33.506052+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:34.506182+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:35.506331+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:36.506487+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:37.506663+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:38.506833+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:39.506989+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:40.507194+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:41.507424+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:42.507584+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:43.507726+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:44.508006+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:45.508210+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:46.508374+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:47.508523+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:48.509287+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:49.509482+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:50.510084+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:51.510278+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:52.510433+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:53.510649+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:54.510845+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:55.511022+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:56.511229+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:57.511390+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:58.511633+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:59.511824+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:00.512019+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:01.512199+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:02.512392+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:03.512554+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:04.512696+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:05.512887+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:06.513102+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:07.513274+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:08.513435+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:09.513563+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:10.513855+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:11.514006+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:12.514172+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:13.514362+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:14.515915+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:15.516123+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:16.516268+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:17.516427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:18.516667+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:19.517004+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:20.517196+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:21.517350+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:22.518307+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:23.518496+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:24.518653+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:25.518824+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:26.519035+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:27.519218+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:28.519397+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:29.519537+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:30.519780+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:31.520006+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:32.520143+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:33.520297+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:34.520452+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:35.520605+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:36.520832+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:37.521003+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:38.521147+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:39.521328+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:40.521537+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:41.521718+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:42.522018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:43.522192+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:44.522343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:45.522491+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:46.522688+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:47.522851+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:48.523065+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:49.523204+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:50.523378+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:51.523519+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:52.523700+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:53.523980+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:54.524132+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:55.524270+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:56.524412+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:57.524567+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:58.524718+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:59.524949+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:00.525254+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:01.525446+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:02.525633+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:03.526024+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:04.526299+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:05.526445+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:06.526603+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:07.526809+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:08.527011+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:09.527207+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:10.527510+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:11.527712+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:12.527933+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:13.528092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:14.528289+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:15.528487+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:16.528647+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:17.528984+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:18.529181+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:19.529354+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:20.529570+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:21.529713+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:22.529940+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:23.531415+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:24.532371+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:25.532666+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:26.532896+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:27.533427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:28.534064+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:29.534380+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:30.534904+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:31.535820+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:32.536582+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:33.536819+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:34.537351+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:35.537649+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:36.538160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:37.538377+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:38.538660+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:39.539084+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:40.539492+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:41.539693+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:42.540038+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:43.540259+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:44.540529+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:45.540845+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:46.541072+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:47.541219+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:48.541453+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:49.541727+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:50.542000+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:51.542169+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:52.542448+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:53.542652+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:54.542834+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:55.543246+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:56.543504+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:57.543838+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:58.544020+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:59.544324+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:00.544857+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:01.545009+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:02.545151+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:03.545285+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:04.545476+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:05.545609+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:06.545791+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:07.545977+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:08.546128+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:09.546321+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:10.546542+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:11.546680+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:12.546864+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:13.547044+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 23617536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:14.547191+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:15.547376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:16.547531+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:17.547682+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:18.547921+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:19.548102+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:20.548295+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:21.548449+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:22.548622+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:23.548806+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:24.548981+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:25.549177+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:26.549358+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:27.549546+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:28.549782+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:29.549930+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:30.550143+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:31.550279+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:32.550540+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:33.550716+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:34.550946+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:35.551141+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 23609344 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:36.551320+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:37.551547+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:38.551701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:39.551834+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:40.552078+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:41.552242+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:42.552374+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:43.552525+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:44.552672+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:45.552894+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:46.553082+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:47.553340+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:48.553502+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:49.553872+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:50.554047+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:51.554209+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:52.554382+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:53.554560+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:54.554709+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:55.554873+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:56.555159+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:57.555347+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:58.555525+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:59.555647+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:00.555816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:01.555961+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:02.556115+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:03.556262+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:04.556430+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:05.556600+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:06.556810+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:07.556965+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:08.557088+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:09.557228+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:10.557392+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:11.557533+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:12.557706+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:13.558366+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:14.558515+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:15.558681+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:16.558827+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:17.558968+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:18.559101+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:19.559238+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:20.559391+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:21.559548+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:22.559679+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:23.559816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:24.559967+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:25.560137+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:26.560343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:27.560537+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:28.560710+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:29.560858+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:30.561022+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:31.561186+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:32.561344+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:33.561505+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:34.561819+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 23601152 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:35.561969+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 8168 writes, 30K keys, 8168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8168 writes, 2028 syncs, 4.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1293 writes, 3208 keys, 1293 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s
                                           Interval WAL: 1293 writes, 587 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:36.562113+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:37.562252+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:38.562458+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:39.562684+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:40.562948+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:41.563110+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:42.563344+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:43.563483+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:44.563649+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:45.563790+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:46.563934+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:47.564072+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:48.564219+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:49.564379+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:50.564552+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:51.564667+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:52.564846+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:53.564992+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:54.565306+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:55.565885+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 23592960 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:56.566034+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:57.566188+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:58.566375+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:59.566521+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:00.566846+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:01.567037+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:02.567231+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:03.567405+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:04.567566+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:05.567788+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:06.567929+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:07.568075+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:08.568247+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:09.568383+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:10.568609+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:11.568805+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:12.568977+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:13.569176+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:14.569310+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 23584768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:15.569456+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:16.569600+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:17.569701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:18.569821+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:19.569956+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:20.570162+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:21.570289+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:22.570420+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:23.570545+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:24.570676+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:25.570799+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:26.570956+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:27.571078+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:28.571272+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:29.571433+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:30.571633+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:31.571802+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145058 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:32.572042+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:33.572337+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:34.572497+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:35.572667+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 23576576 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 346.016723633s of 346.200988770s, submitted: 82
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:36.572865+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 23552000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:37.573105+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 23502848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:38.573244+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:39.573408+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:40.573603+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:41.573853+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:42.574062+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:43.574235+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:44.574411+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:45.574580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:46.574771+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:47.574927+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:48.575112+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:49.575273+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:50.575446+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:51.575592+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:52.575720+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:53.575866+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:54.576007+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:55.576161+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:56.576321+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:57.576486+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:58.576629+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:59.576791+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:00.576964+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:01.577119+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:02.577288+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:03.577439+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:04.577628+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:05.577772+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:06.577951+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:07.578114+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:08.578283+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:09.578465+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:10.578696+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:11.578864+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:12.579054+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:13.579160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:14.579277+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:15.579434+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:16.579553+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:17.579690+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:18.579831+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:19.580013+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:20.580199+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:21.580373+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:22.580559+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:23.580695+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:24.580810+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:25.580938+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:26.581074+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:27.581210+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:28.581369+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:29.581537+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:30.581724+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:31.581893+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:32.582024+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:33.582175+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:34.582316+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:35.582451+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:36.582620+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:37.582782+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:38.582913+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:39.583060+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:40.583441+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:41.583698+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:42.583916+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:43.584145+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:44.584368+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:45.584544+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:46.584765+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:47.584959+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:48.585128+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:49.585568+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:50.585784+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:51.585954+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:52.586144+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:53.586340+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:54.586456+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:55.586609+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:56.586765+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:57.586923+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:58.587071+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:59.587242+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:00.587435+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:01.587610+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:02.587792+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:03.587955+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:04.588078+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:05.588203+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:06.588359+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:07.588505+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:08.588674+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:09.588789+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:10.589110+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:11.589246+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:12.589383+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:13.589517+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:14.589660+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:15.589808+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:16.590021+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:17.590171+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:18.590322+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:19.590535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:20.590828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:21.590943+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:22.591154+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:23.591336+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:24.591666+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:25.591794+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:26.591953+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:27.592306+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:28.592623+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:29.592794+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 23494656 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:30.593116+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:31.593451+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:32.593695+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:33.593897+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:34.594098+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:35.594349+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:36.594560+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:37.594790+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:38.594965+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:39.595166+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 23486464 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:40.595867+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:41.596057+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:42.596192+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:43.596414+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:44.596695+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:45.598429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:46.599619+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:47.600489+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:48.600981+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:49.601925+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:50.602912+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:51.603068+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:52.603431+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:53.603673+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:54.604078+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:55.604342+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:56.604480+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:57.604838+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:58.605101+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:59.605402+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:00.605620+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:01.605860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:02.606033+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:03.606289+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:04.606575+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:05.606791+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:06.606973+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:07.607305+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:08.607572+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:09.607751+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:10.608026+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:11.608247+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:12.608480+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:13.608690+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:14.608993+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:15.609199+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:16.609479+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:17.609692+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:18.609836+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:19.610040+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:20.610267+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:21.610433+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:22.610646+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:23.610879+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:24.611053+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:25.611327+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:26.611513+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:27.611663+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:28.611834+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:29.611966+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:30.612172+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:31.612311+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:32.612439+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:33.612629+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:34.612847+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:35.612968+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 sudo[320810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:36.613098+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:37.613255+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:38.613506+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:39.613789+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:40.614045+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:41.614202+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:42.614359+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:43.614503+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:44.614653+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:45.614860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:46.615018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:47.615219+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:48.615373+0000)
Oct 01 14:22:50 compute-0 sudo[320810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:49.615539+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:50.615872+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:51.616062+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:52.616175+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:53.616343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:54.616492+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:55.616634+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:56.616822+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:57.616945+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:58.617084+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:59.617231+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:00.617417+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:01.617529+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:02.617680+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:03.617833+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:04.618006+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:05.618126+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:06.618252+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:07.618424+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:08.618548+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 sudo[320810]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:09.618694+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:10.618924+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:11.619104+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:12.619233+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:13.619378+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 23478272 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:14.619560+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:15.619712+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:16.620226+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:17.620457+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:18.620635+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:19.620792+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:20.621099+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:21.621267+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:22.621455+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:23.621618+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:24.621800+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:25.622026+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:26.622276+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:27.622417+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:28.622563+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:29.622776+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:30.623010+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:31.623205+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:32.623501+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:33.623717+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:34.623978+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:35.624208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:36.624386+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:37.624536+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:38.624829+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:39.625008+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:40.625209+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:41.625373+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:42.625555+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:43.625783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:44.625959+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:45.626125+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:46.626288+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:47.626449+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:48.626614+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:49.626815+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:50.627021+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:51.627213+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:52.627405+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:53.627586+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:54.627665+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:55.627791+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:56.627932+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:57.628112+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:58.628296+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:59.628432+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:00.628599+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:01.628797+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:02.628984+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:03.629120+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:04.629412+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:05.629614+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:06.629749+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 23470080 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:07.629885+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:08.630046+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:09.630203+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:10.630411+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:11.630517+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:12.630671+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:13.630796+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:14.630961+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:15.631103+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:16.631292+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:17.631439+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 23461888 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:18.631611+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:19.631828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:20.632067+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:21.632199+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:22.632346+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:23.632467+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:24.632625+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:25.632872+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:26.633117+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:27.633302+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:28.633517+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:29.633692+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:30.633889+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:31.634084+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:32.634262+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:33.634447+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:34.634610+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:35.634792+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:36.634943+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:37.635147+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:38.643010+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:39.643191+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:40.643456+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:41.643698+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:42.643888+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:43.644104+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:44.644296+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:45.644442+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:46.644607+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:47.644800+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:48.644957+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:49.645102+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:50.645303+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:51.645463+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:52.645640+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:53.645848+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:54.646042+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:55.646225+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:56.646375+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:57.646517+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:58.646870+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:59.647251+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:00.647922+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:01.648434+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:02.648890+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:03.649342+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:04.649540+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:05.649896+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:06.650298+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:07.650586+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:08.650890+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:09.651075+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:10.651314+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:11.651508+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:12.651666+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:13.651995+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:14.652147+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:15.652258+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:16.652394+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:17.652577+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:18.652715+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:19.652942+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:20.653205+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:21.653423+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:22.653701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:23.653876+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:24.654046+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:25.654214+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:26.654375+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:27.654586+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:28.654792+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:29.654987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:30.655395+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:31.656493+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:32.657513+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:33.657857+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:34.658007+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:35.658528+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:36.659117+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:37.659576+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:38.660068+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:39.660424+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:40.660776+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:41.660921+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:42.661108+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:43.661288+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:44.661427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:45.661593+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:46.661788+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:47.661929+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:48.662054+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:49.662235+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:50.662547+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:51.662784+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:52.662992+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 23453696 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:53.663200+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:54.663338+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:55.663600+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:56.663828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:57.664394+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:58.664637+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:59.664837+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:00.665013+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:01.665140+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:02.665889+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:03.666135+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:04.666434+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:05.666972+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:06.667233+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:07.667639+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:08.668028+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:09.668275+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:10.668613+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:11.668816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:12.669077+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:13.669257+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:14.669401+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:15.669561+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:16.669717+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:17.669913+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:18.670072+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:19.670220+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:20.670501+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:21.670665+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:22.670815+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:23.670961+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:24.671142+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:25.671289+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:26.671401+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:27.671526+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:28.671646+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:29.671795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:30.672046+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:31.672278+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:32.672476+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:33.672656+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:34.672795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:35.672986+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:36.673137+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:37.673270+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:38.673411+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144178 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:39.673542+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:40.673667+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:41.673872+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:42.674067+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:43.674201+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 23445504 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 427.182464600s of 427.830413818s, submitted: 90
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144426 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:44.674352+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ce000/0x0/0x4ffc00000, data 0xd946b5/0xea0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 187 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1adc20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:45.674494+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:46.674632+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ca000/0x0/0x4ffc00000, data 0xd96263/0xea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:47.674816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:48.674981+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147832 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:49.675123+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:50.675321+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:51.675471+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 187 heartbeat osd_stat(store_statfs(0x4fa7ca000/0x0/0x4ffc00000, data 0xd96263/0xea2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:52.675680+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 23429120 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:53.675929+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:54.676070+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:55.676293+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:56.676563+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:57.676851+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:58.677025+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:59.677198+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:00.677416+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:01.677593+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:02.677863+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:03.678095+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:04.678288+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:05.679287+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:06.679461+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:07.679613+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:08.679786+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:09.679935+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:10.680128+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:11.680338+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:12.680474+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:13.680617+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:14.680802+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:15.680969+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:16.681128+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:17.681326+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:18.681488+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:19.681667+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:20.681966+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:21.682135+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:22.682312+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:23.682489+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:24.682670+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:25.682916+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:26.683142+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:27.683316+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:28.683508+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:29.683671+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:30.683788+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:31.683951+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:32.684100+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:33.684274+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:34.684404+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 23420928 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:35.684552+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:36.684895+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:37.685066+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:38.685256+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:39.685562+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:40.685825+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:41.686055+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:42.686297+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:43.686481+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:44.686852+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:45.686987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:46.687428+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:47.687795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:48.687990+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:49.688204+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:50.688436+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:51.688798+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:52.689011+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:53.689367+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:54.689666+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:55.689789+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:56.689894+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:57.690133+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:58.690335+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:59.690586+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:00.690871+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:01.691075+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:02.691292+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:03.691525+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:04.691712+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:05.691958+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:06.692140+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:07.692385+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:08.692605+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:09.692819+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:10.693021+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:11.693221+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:12.693385+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:13.693579+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:14.693787+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:15.693964+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:16.694155+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:17.694357+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:18.694523+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:19.694690+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:20.694926+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:21.695135+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:22.695311+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:23.695580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:24.695881+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:25.696175+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:26.696512+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:27.696847+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:28.697071+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:29.697374+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:30.697594+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:31.697856+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:32.698092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:33.698315+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150614 data_alloc: 218103808 data_used: 458752
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:34.698629+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c8000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:35.698875+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 8417 writes, 30K keys, 8417 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8417 writes, 2145 syncs, 3.92 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 249 writes, 454 keys, 249 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                           Interval WAL: 249 writes, 117 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b273f800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 111.430877686s of 112.040237427s, submitted: 42
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1c7b0e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 23412736 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 ms_handle_reset con 0x55b1b1c9e800 session 0x55b1b1c07860
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:36.699042+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:37.699193+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:38.699346+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161254 data_alloc: 218103808 data_used: 5115904
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:39.699493+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:40.699670+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:41.699817+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd97cc6/0xea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:42.699992+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:43.700131+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161726 data_alloc: 218103808 data_used: 5115904
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:44.700381+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 18751488 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:45.700512+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 189 ms_handle_reset con 0x55b1b0261800 session 0x55b1afdb01e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:46.700701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:47.700900+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 189 heartbeat osd_stat(store_statfs(0x4fac36000/0x0/0x4ffc00000, data 0x929864/0xa36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.288143158s of 12.452685356s, submitted: 58
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:48.701065+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 190 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1ae21e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071425 data_alloc: 218103808 data_used: 466944
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:49.701247+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:50.701470+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:51.701643+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 22192128 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 190 heartbeat osd_stat(store_statfs(0x4fb434000/0x0/0x4ffc00000, data 0x12b435/0x239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:52.701852+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:53.702079+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074399 data_alloc: 218103808 data_used: 466944
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:54.702231+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 191 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x12ceb4/0x23c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:55.702387+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:56.702624+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 22183936 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:57.702855+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 22175744 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:58.703724+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 22175744 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:59.703987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077373 data_alloc: 218103808 data_used: 466944
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.165143013s of 11.253076553s, submitted: 44
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 192 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x12e917/0x23f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 21110784 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:00.704182+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1b122de00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:01.704444+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:02.704640+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:03.704869+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:04.705171+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:05.705377+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:06.705671+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:07.705905+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:08.706164+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:09.706450+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:10.706714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:11.707018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:12.707343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:13.707605+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:14.707843+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:15.708040+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:16.708195+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:17.708362+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:18.708514+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:19.708702+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:20.709037+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:21.709246+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:22.709522+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:23.709722+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:24.709987+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:25.710174+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:26.710308+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:27.710454+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:28.710596+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:29.710855+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:30.711106+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:31.711294+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:32.711450+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb429000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:33.711582+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:34.711874+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085855 data_alloc: 218103808 data_used: 475136
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.367374420s of 35.459510803s, submitted: 20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 21094400 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:35.712029+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 21078016 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:36.712208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 20971520 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:37.712413+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:38.712552+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:39.712706+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:40.712925+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:41.713067+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:42.713233+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:43.713417+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:44.713604+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:45.713843+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 20914176 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:46.713991+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:47.714139+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:48.714408+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:49.714510+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:50.714668+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:51.714901+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:52.715095+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:53.715234+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:54.715396+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:55.715540+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:56.715798+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:57.715936+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:58.716163+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:59.716368+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:00.716545+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:01.716777+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 20905984 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:02.716936+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:03.717134+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:04.717336+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:05.717615+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:06.717858+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:07.718064+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:08.718259+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:09.718397+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:10.718612+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:11.718836+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:12.718989+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:13.719268+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:14.719488+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:15.719705+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:16.720041+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:17.720208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:18.720387+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:19.720584+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:20.720772+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:21.720917+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:22.721050+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:23.721181+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:24.721420+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:25.721589+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:26.721825+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:27.721982+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:28.722102+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:29.722265+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:30.722442+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:31.722628+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:32.722799+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:33.722942+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:34.723138+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:35.723265+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:36.723441+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:37.723585+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:38.723794+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:39.723934+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085983 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:40.724138+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:41.724341+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 sudo[320845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- lvm list --format json
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:42.724519+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b273f800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 67.924308777s of 68.302268982s, submitted: 108
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:43.724797+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 heartbeat osd_stat(store_statfs(0x4fb42a000/0x0/0x4ffc00000, data 0x1304c7/0x244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:44.725004+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 194 ms_handle_reset con 0x55b1b273f800 session 0x55b1b2692d20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091971 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9ec00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 20897792 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:45.725169+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 194 ms_handle_reset con 0x55b1b1c9ec00 session 0x55b1b26925a0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:46.725321+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:47.725495+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:48.725686+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:49.725822+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:50.726001+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:51.726189+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:52.726361+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:53.726583+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:54.726848+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 20881408 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:55.727056+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:56.727248+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 sudo[320845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:57.727470+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:58.727631+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:59.727847+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:00.728015+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:01.728197+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:02.728338+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:03.728481+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:04.728652+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:05.728829+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:06.729043+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:07.729237+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:08.729512+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:09.729770+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095441 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:10.729973+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 20873216 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b0261800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.234910965s of 27.315486908s, submitted: 7
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x133bd1/0x24b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:11.730152+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 20824064 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 ms_handle_reset con 0x55b1b0261800 session 0x55b1b26923c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:12.730388+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:13.730554+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:14.730793+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095342 data_alloc: 218103808 data_used: 528384
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:15.730963+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:16.731139+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:17.731353+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 20774912 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 heartbeat osd_stat(store_statfs(0x4fb421000/0x0/0x4ffc00000, data 0x13576f/0x24c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:18.731540+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:19.731837+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:20.732063+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:21.732221+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:22.732400+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:23.732665+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:24.732821+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:25.733095+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:26.733342+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:27.733655+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:28.733960+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:29.734214+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:30.734527+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:31.734828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:32.734989+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:33.735247+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:34.735426+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:35.735640+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:36.735841+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:37.736137+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:38.736429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:39.736643+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:40.736822+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:41.737083+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:42.737358+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:43.737590+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:44.737795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:45.737978+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:46.738208+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:47.738445+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:48.738656+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:49.738813+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:50.739076+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:51.739267+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:52.739505+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:53.739753+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:54.740038+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:55.740268+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:56.742354+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:57.745056+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:58.748505+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:59.748920+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:00.749071+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:01.750095+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:02.752271+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:03.752580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:04.753784+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:05.755047+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:06.755387+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:07.755572+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:08.755888+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:09.756151+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:10.756335+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:11.756493+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:12.756618+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:13.757110+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:14.757252+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:15.757519+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:16.757698+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:17.758020+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:18.758352+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:19.758563+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:20.758856+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:21.759214+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:22.759578+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:23.759860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:24.760102+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:25.760320+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:26.760625+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:27.760779+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:28.760966+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:29.761111+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:30.761267+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:31.761406+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:32.761532+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:33.761696+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:34.761849+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:35.761996+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 20766720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:36.762117+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:37.762184+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:38.762303+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:39.762480+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:40.762704+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:41.762860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:42.763018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:43.763177+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:44.763324+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:45.763484+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:46.763654+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:47.763785+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:48.763957+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:49.764142+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:50.764391+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:51.764569+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:52.764718+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:53.764968+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:54.765142+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:55.765358+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:56.765549+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:57.765825+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:58.765994+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:59.766156+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:00.766523+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:01.766714+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:02.766935+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:03.767042+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:04.767278+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:05.767497+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:06.767657+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:07.767803+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:08.767931+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:09.768092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:10.768266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:11.768440+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:12.768579+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:13.768764+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:14.768924+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:15.769051+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:16.769182+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:17.769302+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:18.769431+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:19.769556+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:20.769813+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:21.769981+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:22.770123+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:23.770273+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:24.770410+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:25.770648+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:26.770808+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:27.770972+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:28.771114+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:29.771317+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:30.771573+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:31.771819+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:32.771974+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:33.772149+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:34.772352+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:35.773048+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:36.773475+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:37.774005+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:38.774313+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:39.774710+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:40.775003+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:41.775241+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:42.775430+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:43.775628+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:44.775913+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:45.776264+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:46.776492+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:47.776681+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:48.776932+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:49.777187+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:50.777430+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:51.777608+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:52.777845+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:53.778058+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:54.778212+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:55.778404+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:56.778574+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:57.778810+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:58.779008+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:59.779210+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:00.779456+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:01.779608+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:02.779760+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:03.779932+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:04.780099+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:05.780303+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:06.780526+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:07.780719+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:08.780951+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:09.781181+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:10.781468+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:11.781650+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:12.781755+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:13.781909+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:14.782035+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:15.782215+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:16.782364+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:17.782521+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:18.782619+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:19.782795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:20.783038+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:21.783158+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:22.783281+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:23.783415+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:24.783546+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:25.783718+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:26.783895+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:27.784028+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:28.784137+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:29.784251+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:30.784410+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:31.784561+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:32.784701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:33.784860+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:34.785006+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:35.785161+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:36.785350+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:37.785460+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:38.785672+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:39.785875+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:40.786089+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:41.786233+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:42.786433+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:43.786576+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:44.786855+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:45.787057+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:46.787186+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:47.787285+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:48.787503+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:49.787777+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:50.788018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:51.788216+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:52.788410+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:53.788558+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:54.788768+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:55.789086+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:56.789266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:57.789440+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:58.789571+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:59.789678+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:00.789963+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:01.790174+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:02.790405+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:03.790631+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:04.790847+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:05.791072+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:06.791405+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:07.791633+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:08.791834+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:09.791998+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:10.792171+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:11.792334+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:12.792502+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:13.792690+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:14.792938+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:15.793116+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:16.793304+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:17.793503+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:18.793715+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:19.793939+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:20.794195+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:21.794343+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:22.794546+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:23.794826+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:24.795068+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:25.795252+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:26.795512+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:27.795702+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:28.795910+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:29.796112+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:30.796304+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:31.796499+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:32.796702+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:33.796938+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:34.797138+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:35.797388+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:36.797568+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:37.797826+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:38.798002+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:39.798205+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:40.798417+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:41.798580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:42.798811+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:43.798940+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:44.799149+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:45.799352+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:46.799535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:47.799716+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:48.799844+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:49.800055+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:50.800316+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:51.800566+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:52.800720+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:53.800900+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:54.801073+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:55.801244+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:56.801428+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:57.801597+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:58.801813+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:59.801989+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:00.802216+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:01.802334+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:02.802459+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:03.802593+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:04.802706+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:05.802944+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:06.803077+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:07.803294+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:08.803525+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:09.803657+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:10.803819+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:11.803961+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:12.804105+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:13.804237+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:14.804325+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:15.804536+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:16.804900+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:17.805224+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:18.805414+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:19.805587+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:20.805833+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:21.805950+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:22.806147+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:23.806266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:24.806425+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:25.806597+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:26.806822+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:27.806983+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:28.807143+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:29.807228+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:30.807416+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:31.807545+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:32.807701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:33.807829+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:34.807975+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:35.808097+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:36.808269+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:37.808492+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:38.808671+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:39.808799+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:40.809255+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:41.809388+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:42.809520+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:43.809636+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:44.809776+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:45.809901+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:46.810010+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:47.810182+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:48.810361+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:49.810530+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:50.810828+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:51.811009+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:52.811207+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:53.811376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:54.811499+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:55.811637+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:56.811770+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:57.811962+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:58.812104+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:59.812282+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:00.812451+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:01.812612+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:02.812804+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:03.813000+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:04.813229+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:05.813381+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:06.813636+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:07.813796+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:08.813922+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:09.814129+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:10.814311+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:11.814482+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:12.814636+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:13.814869+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:14.815082+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:15.815235+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:16.815376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:17.815557+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:18.816708+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:19.817527+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:20.817865+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:21.818760+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:22.819539+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:23.820180+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:24.820726+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:25.821249+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:26.821781+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:27.822120+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:28.822460+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:29.822816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:30.823160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:31.823453+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:32.823759+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:33.823934+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302708969' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:34.824118+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:35.824413+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:36.824623+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:37.824783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:38.824938+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:39.825120+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:40.825326+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:41.825471+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:42.825590+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:43.825709+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:44.825894+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:45.826167+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:46.826418+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:47.826566+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:48.826803+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:49.826991+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:50.827225+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:51.827370+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:52.827543+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:53.827725+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:54.827960+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:55.828136+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:56.828301+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:57.828489+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:58.828652+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:59.828811+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:00.829034+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:01.829199+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:02.829410+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:03.829633+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:04.829801+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:05.830016+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:06.830196+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:07.830374+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:08.830554+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:09.830783+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:10.831016+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:11.831191+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:12.831315+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:13.831550+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:14.831855+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:15.832087+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:16.832384+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:17.832563+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:18.832768+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:19.832980+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:20.833203+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:21.833362+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:22.833558+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:23.834279+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:24.834435+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:25.835106+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:26.835689+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:27.836187+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:28.836427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:29.836593+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:30.836871+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:31.837066+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:32.837431+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:33.837788+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:34.838016+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 8971 writes, 31K keys, 8971 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8971 writes, 2398 syncs, 3.74 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 554 writes, 1220 keys, 554 commit groups, 1.0 writes per commit group, ingest: 0.58 MB, 0.00 MB/s
                                           Interval WAL: 554 writes, 253 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:35.838248+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:36.838394+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:37.838549+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:38.838782+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:39.839095+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:40.839659+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:41.839944+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:42.840119+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:43.840289+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:44.840524+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:45.840692+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:46.840993+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:47.841229+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:48.841467+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:49.841678+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:50.842011+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:51.842280+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:52.842514+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:53.842687+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:54.843004+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:55.843376+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x1371d2/0x24f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:56.844103+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099340 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1474000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 465.014099121s of 466.305084229s, submitted: 64
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 198 ms_handle_reset con 0x55b1b1474000 session 0x55b1b1650d20
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:57.844290+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:58.844422+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9e000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 198 heartbeat osd_stat(store_statfs(0x4fb41c000/0x0/0x4ffc00000, data 0x138d93/0x251000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 20742144 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:59.844584+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:00.844790+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:01.844965+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104575 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:02.845112+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:03.845323+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:04.845552+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 199 ms_handle_reset con 0x55b1b1c9e000 session 0x55b1af1ef0e0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fb419000/0x0/0x4ffc00000, data 0x13a964/0x254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:05.845813+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:06.846600+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103695 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:07.846809+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 20758528 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:08.846956+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 199 heartbeat osd_stat(store_statfs(0x4fb41a000/0x0/0x4ffc00000, data 0x13a964/0x254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.181988716s of 11.751212120s, submitted: 34
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b273f800
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 20676608 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:09.847151+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 ms_handle_reset con 0x55b1b273f800 session 0x55b1b1c503c0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:10.847403+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:11.847599+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:12.847789+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:13.848092+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:14.848266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:15.848436+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:16.848670+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:17.849026+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:18.849270+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:19.849459+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:20.849687+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:21.849816+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:22.850020+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:23.850242+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:24.850429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:25.850594+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:26.850883+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:27.851106+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:28.851293+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:29.851479+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:30.851658+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:31.851848+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1147713 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:32.852037+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:33.852162+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:34.852332+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:35.852469+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 20668416 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa0000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.502782822s of 27.594612122s, submitted: 33
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:36.852583+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 20643840 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:37.852704+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 20611072 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:38.852821+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 20545536 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:39.852933+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 20512768 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:40.853098+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 20488192 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:41.853234+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:42.853349+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:43.853487+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:44.853659+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:45.853887+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:46.854051+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:47.854221+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:48.854393+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:49.854658+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:50.854838+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:51.855034+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:52.855173+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:53.855302+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:54.855504+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:55.855724+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 heartbeat osd_stat(store_statfs(0x4fafa2000/0x0/0x4ffc00000, data 0x5adf8a/0x6cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:56.856103+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145697 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:57.856371+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:58.856579+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 20480000 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: handle_auth_request added challenge on 0x55b1b1c9f000
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:59.856756+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.144542694s of 23.247913361s, submitted: 90
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _renew_subs
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 202 ms_handle_reset con 0x55b1b1c9f000 session 0x55b1b26a5e00
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:00.857455+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:01.858018+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118181 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:02.858516+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:03.858901+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:04.859305+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:05.859592+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:06.859832+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118181 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 202 heartbeat osd_stat(store_statfs(0x4fb40e000/0x0/0x4ffc00000, data 0x13fb15/0x25d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:07.860029+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 20430848 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:08.860253+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:09.860408+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:10.860573+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:11.860701+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120963 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:12.860826+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:13.860969+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:14.861085+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:15.861212+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:16.861364+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120963 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:17.861542+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:18.861721+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:19.861957+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:20.862139+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:21.862370+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:22.862587+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:23.862795+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:24.862944+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:25.863099+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:26.863333+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:27.863465+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:28.863596+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:29.863841+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:30.864062+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:31.864232+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:32.864927+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:33.865429+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:34.865656+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:35.866769+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:36.867649+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:37.868035+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:38.868418+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:39.868694+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:40.869009+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:41.869427+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:42.869821+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:43.870168+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:44.870502+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:45.870747+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:46.871160+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:47.871338+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:48.871474+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:49.871839+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:50.872138+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:51.872369+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:52.872578+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:53.872771+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:54.872927+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:55.873266+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:56.873535+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:57.873809+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:58.873973+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:59.874178+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:00.874453+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:01.874626+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:02.874873+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:03.874989+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:04.875274+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:05.875425+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:06.875580+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:07.875692+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:08.875804+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:09.875934+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:10.880061+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:11.880185+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:12.880282+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:13.880412+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:14.880541+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:15.880676+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 20471808 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:16.880830+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 20324352 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:50 compute-0 ceph-osd[90500]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:50 compute-0 ceph-osd[90500]: bluestore.MempoolThread(0x55b1adc65b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121123 data_alloc: 218103808 data_used: 557056
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'config diff' '{prefix=config diff}'
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:17.880936+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'config show' '{prefix=config show}'
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 19931136 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: osd.2 203 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x141578/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:18.881050+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 19808256 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: tick
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_tickets
Oct 01 14:22:50 compute-0 ceph-osd[90500]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:19.881186+0000)
Oct 01 14:22:50 compute-0 ceph-osd[90500]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 19742720 heap: 104824832 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:50 compute-0 ceph-osd[90500]: do_command 'log dump' '{prefix=log dump}'
Oct 01 14:22:50 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:50 compute-0 podman[320945]: 2025-10-01 14:22:50.981103565 +0000 UTC m=+0.044142593 container create 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:22:50 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 14:22:51 compute-0 systemd[1]: Started libpod-conmon-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope.
Oct 01 14:22:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:50.960652845 +0000 UTC m=+0.023691893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:51.069534003 +0000 UTC m=+0.132573061 container init 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:51.075804813 +0000 UTC m=+0.138843841 container start 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:22:51 compute-0 pensive_kapitsa[320983]: 167 167
Oct 01 14:22:51 compute-0 systemd[1]: libpod-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope: Deactivated successfully.
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:51.08924851 +0000 UTC m=+0.152287538 container attach 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:51.089602211 +0000 UTC m=+0.152641239 container died 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct 01 14:22:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 01 14:22:51 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412661162' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 14:22:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-63cb7be7151d75fd8b56aa9d7dc3ec9163be9d274aace8164022a23b91ab3dae-merged.mount: Deactivated successfully.
Oct 01 14:22:51 compute-0 podman[320945]: 2025-10-01 14:22:51.141892072 +0000 UTC m=+0.204931100 container remove 571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:22:51 compute-0 systemd[1]: libpod-conmon-571c6396770189dfa1c73ea2ae8a1a925a7de3cb8553714c688cf7e1cfb16af9.scope: Deactivated successfully.
Oct 01 14:22:51 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:51 compute-0 podman[321034]: 2025-10-01 14:22:51.302996319 +0000 UTC m=+0.045033962 container create dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct 01 14:22:51 compute-0 systemd[1]: Started libpod-conmon-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope.
Oct 01 14:22:51 compute-0 ceph-mon[74802]: pgmap v2414: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:51 compute-0 ceph-mon[74802]: from='client.15175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mon[74802]: from='client.15179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1302708969' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1412661162' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 01 14:22:51 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:51 compute-0 podman[321034]: 2025-10-01 14:22:51.281108954 +0000 UTC m=+0.023146617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:51 compute-0 podman[321034]: 2025-10-01 14:22:51.411546996 +0000 UTC m=+0.153584659 container init dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 01 14:22:51 compute-0 podman[321034]: 2025-10-01 14:22:51.419476278 +0000 UTC m=+0.161513931 container start dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 01 14:22:51 compute-0 podman[321034]: 2025-10-01 14:22:51.429695713 +0000 UTC m=+0.171733366 container attach dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:22:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 01 14:22:51 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812789747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:51 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 01 14:22:51 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821354881' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 14:22:51 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]: {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     "0": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "devices": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "/dev/loop3"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             ],
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_name": "ceph_lv0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_size": "21470642176",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "name": "ceph_lv0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "tags": {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_uuid": "wGj4vh-erWx-nH21-oq8z-ueHp-078E-I6PPeS",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_name": "ceph",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.crush_device_class": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.encrypted": "0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_fsid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_id": "0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.vdo": "0"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             },
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "vg_name": "ceph_vg0"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         }
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     ],
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     "1": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "devices": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "/dev/loop4"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             ],
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_name": "ceph_lv1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_size": "21470642176",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f5852bc7-e830-489a-b8a9-42dfbbe71426,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "name": "ceph_lv1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "path": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "tags": {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_uuid": "BkdXfz-nWgz-mEXw-iJms-8lq8-Wdov-sMY2rY",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_name": "ceph",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.crush_device_class": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.encrypted": "0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_fsid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_id": "1",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.vdo": "0"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             },
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "vg_name": "ceph_vg1"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         }
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     ],
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     "2": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "devices": [
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "/dev/loop5"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             ],
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_name": "ceph_lv2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_size": "21470642176",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=eb4b6ead-01d1-53b3-a52a-47dcc600555f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c4c937e2-a8a8-47c3-af37-fdedb6fff1f9,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "lv_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "name": "ceph_lv2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "path": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "tags": {
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.block_uuid": "mFr8Et-n9mD-SOrI-A5Qn-f0Ir-Pu16-h0PpgK",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cephx_lockbox_secret": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.cluster_name": "ceph",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.crush_device_class": "",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.encrypted": "0",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_fsid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osd_id": "2",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:                 "ceph.vdo": "0"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             },
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "type": "block",
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:             "vg_name": "ceph_vg2"
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:         }
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]:     ]
Oct 01 14:22:52 compute-0 happy_dubinsky[321055]: }
Oct 01 14:22:52 compute-0 systemd[1]: libpod-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope: Deactivated successfully.
Oct 01 14:22:52 compute-0 podman[321034]: 2025-10-01 14:22:52.149911148 +0000 UTC m=+0.891948811 container died dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct 01 14:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f86a64f38e5273b8d7efc9b37096ec17c78a1aa6b466fc0c8e8889beb3447a2-merged.mount: Deactivated successfully.
Oct 01 14:22:52 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 01 14:22:52 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997416852' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mon[74802]: from='client.15183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mon[74802]: from='client.15187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1812789747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mon[74802]: from='client.15191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3821354881' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 01 14:22:52 compute-0 podman[321034]: 2025-10-01 14:22:52.410820574 +0000 UTC m=+1.152858257 container remove dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:22:52 compute-0 systemd[1]: libpod-conmon-dad22e3df1c6afdfaeaaaa0c57319ebf450ccca1f434c10a211d7a62256b6d12.scope: Deactivated successfully.
Oct 01 14:22:52 compute-0 sudo[320845]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:52 compute-0 sudo[321212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:52 compute-0 sudo[321212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:52 compute-0 sudo[321212]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:52 compute-0 sudo[321275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 01 14:22:52 compute-0 sudo[321275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:52 compute-0 sudo[321275]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:52 compute-0 sudo[321300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:52 compute-0 sudo[321300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:52 compute-0 sudo[321300]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:52 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:52 compute-0 ceph-mgr[75103]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 14:22:52 compute-0 ceph-eb4b6ead-01d1-53b3-a52a-47dcc600555f-mgr-compute-0-puxjpb[75099]: 2025-10-01T14:22:52.726+0000 7f13b53e1640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct 01 14:22:52 compute-0 sudo[321327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/eb4b6ead-01d1-53b3-a52a-47dcc600555f/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid eb4b6ead-01d1-53b3-a52a-47dcc600555f -- raw list --format json
Oct 01 14:22:52 compute-0 sudo[321327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.073560783 +0000 UTC m=+0.043133021 container create 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 01 14:22:53 compute-0 systemd[1]: Started libpod-conmon-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope.
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 01 14:22:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206076676' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.049322173 +0000 UTC m=+0.018894421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.172635939 +0000 UTC m=+0.142208207 container init 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.185679724 +0000 UTC m=+0.155252002 container start 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct 01 14:22:53 compute-0 modest_elion[321453]: 167 167
Oct 01 14:22:53 compute-0 systemd[1]: libpod-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope: Deactivated successfully.
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.192512621 +0000 UTC m=+0.162084939 container attach 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.193019877 +0000 UTC m=+0.162592155 container died 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 01 14:22:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962468013' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 14:22:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f5f0d7a307aea69f06273be7ef471412d5a3f365358f27e0b58692a2743439-merged.mount: Deactivated successfully.
Oct 01 14:22:53 compute-0 podman[321437]: 2025-10-01 14:22:53.252517997 +0000 UTC m=+0.222090235 container remove 924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_elion, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:22:53 compute-0 systemd[1]: libpod-conmon-924d4da25976a5e67473d000916a4ec6890c811b9798a040aad68bf29f670174.scope: Deactivated successfully.
Oct 01 14:22:53 compute-0 ceph-mon[74802]: pgmap v2415: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:53 compute-0 ceph-mon[74802]: from='client.15195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2997416852' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mon[74802]: from='client.15201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/206076676' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3962468013' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 01 14:22:53 compute-0 podman[321507]: 2025-10-01 14:22:53.418808768 +0000 UTC m=+0.049803623 container create 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 01 14:22:53 compute-0 systemd[1]: Started libpod-conmon-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope.
Oct 01 14:22:53 compute-0 systemd[1]: Started libcrun container.
Oct 01 14:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 01 14:22:53 compute-0 podman[321507]: 2025-10-01 14:22:53.401056825 +0000 UTC m=+0.032051710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 01 14:22:53 compute-0 podman[321507]: 2025-10-01 14:22:53.498351265 +0000 UTC m=+0.129346130 container init 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 01 14:22:53 compute-0 podman[321507]: 2025-10-01 14:22:53.506968208 +0000 UTC m=+0.137963063 container start 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 01 14:22:53 compute-0 podman[321507]: 2025-10-01 14:22:53.510929164 +0000 UTC m=+0.141924019 container attach 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 01 14:22:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701630211' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 01 14:22:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144850208' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 14:22:53 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:53 compute-0 crontab[321653]: (root) LIST (root)
Oct 01 14:22:53 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 01 14:22:53 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2033895320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490354223' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266640394' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1701630211' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4144850208' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2033895320' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2490354223' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2266640394' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]: {
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982": {
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_id": 0,
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_uuid": "1d0ec2e9-41d6-4ce3-beeb-09dc3cc45982",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "type": "bluestore"
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     },
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9": {
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_id": 2,
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_uuid": "c4c937e2-a8a8-47c3-af37-fdedb6fff1f9",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "type": "bluestore"
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     },
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     "f5852bc7-e830-489a-b8a9-42dfbbe71426": {
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "ceph_fsid": "eb4b6ead-01d1-53b3-a52a-47dcc600555f",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_id": 1,
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "osd_uuid": "f5852bc7-e830-489a-b8a9-42dfbbe71426",
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:         "type": "bluestore"
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]:     }
Oct 01 14:22:54 compute-0 stupefied_mccarthy[321540]: }
Oct 01 14:22:54 compute-0 systemd[1]: libpod-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope: Deactivated successfully.
Oct 01 14:22:54 compute-0 podman[321507]: 2025-10-01 14:22:54.434433286 +0000 UTC m=+1.065428131 container died 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842463792' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 14:22:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fda86f7ca19c2dfbfed63e24057cedfdb38a41f027f7ee2a603f9c9ae0f31eb-merged.mount: Deactivated successfully.
Oct 01 14:22:54 compute-0 podman[321507]: 2025-10-01 14:22:54.492690296 +0000 UTC m=+1.123685151 container remove 8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mccarthy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 01 14:22:54 compute-0 systemd[1]: libpod-conmon-8cbfdb61071bcbafbd24da419ae1cf4c1edf560be01cc2e3bb8b6d8b73626736.scope: Deactivated successfully.
Oct 01 14:22:54 compute-0 sudo[321327]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:54 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 706ee2d7-79b4-4f58-8d5b-a90b62ae4702 does not exist
Oct 01 14:22:54 compute-0 ceph-mgr[75103]: [progress WARNING root] complete: ev 9acf9a75-20f7-4a14-beba-c8578f8043df does not exist
Oct 01 14:22:54 compute-0 sudo[321814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 01 14:22:54 compute-0 sudo[321814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:54 compute-0 sudo[321814]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:54 compute-0 sudo[321842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 01 14:22:54 compute-0 sudo[321842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 01 14:22:54 compute-0 sudo[321842]: pam_unix(sudo:session): session closed for user root
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143039329' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 14:22:54 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 01 14:22:54 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260546938' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:17.429589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79568896 unmapped: 23986176 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:18.429789+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:19.429950+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:20.430079+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:21.430197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:22.430374+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:23.430608+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:24.430866+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:25.431120+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:26.431294+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:27.431482+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:28.431695+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:29.431861+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:30.432042+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 7951 writes, 30K keys, 7951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7951 writes, 1749 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 740 writes, 1899 keys, 740 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s
                                           Interval WAL: 740 writes, 319 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:31.432179+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:32.432342+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:33.432577+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:34.432846+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:35.433143+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:36.433319+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:37.433459+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:38.433593+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:39.433711+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:40.433891+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:41.434145+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:42.434352+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:43.434580+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:44.434804+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:45.435027+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:46.435153+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:47.435301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:48.435468+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:49.435643+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:50.435841+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:51.436132+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:52.436375+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:53.437116+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:54.437252+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:55.437441+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:56.437603+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:57.437829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:58.437995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:59.438147+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:00.438318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:01.438580+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:02.438866+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:03.440236+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:04.440516+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:05.440813+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:06.441052+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:07.441305+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:08.441565+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:09.441904+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:10.442200+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:11.442491+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:12.442657+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:13.442791+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:14.442956+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:15.443152+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:16.443310+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:17.443486+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:18.443643+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:19.443784+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:20.443940+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:21.444095+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:22.444249+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:23.444461+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:24.444582+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:25.444722+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:26.444897+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:27.445081+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:28.445225+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:29.445369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:30.445482+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:31.445632+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:32.445789+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:33.445898+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:34.446071+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099284 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:35.446222+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79577088 unmapped: 23977984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 217.385223389s of 217.396347046s, submitted: 13
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:36.446351+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79675392 unmapped: 23879680 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:37.446494+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:38.446593+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:39.446690+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:40.446817+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:41.446989+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:42.447117+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:43.447244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:44.447400+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:45.447603+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:46.447776+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:47.447919+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:48.448058+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:49.448223+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:50.448383+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:51.448534+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:52.448679+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:53.448861+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:54.449029+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:55.449181+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:56.449318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:57.449477+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:58.449640+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:59.449793+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:00.449981+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:01.450126+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:02.450280+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:03.450425+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:04.450573+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:05.451283+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:06.451421+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:07.451559+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:08.451723+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:09.451860+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:10.452050+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:11.452193+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:12.452359+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:13.452533+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:14.452702+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:15.452956+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:16.453127+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:17.453290+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:18.453469+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:19.453632+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:20.453797+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:21.453950+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:22.454096+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:23.454283+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:24.454422+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:25.454605+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:26.454826+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:27.454967+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:28.455134+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:29.455273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:30.455460+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:31.455625+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:32.455846+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:33.456007+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:34.456154+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:35.456373+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:36.456561+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:37.456767+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:38.456973+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:39.457126+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:40.457283+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:41.457429+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:42.457590+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:43.457803+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:44.457955+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:45.458359+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:46.458561+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:47.458798+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:48.458971+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:49.459105+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:50.459336+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:51.459503+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:52.459679+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:53.459831+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:54.459968+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:55.460198+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:56.460431+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:57.460602+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:58.460807+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:59.460949+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:00.461137+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:01.461351+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:02.461517+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:03.461677+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:04.461847+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:05.462053+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:06.462862+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:07.463086+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:08.463260+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:09.463443+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:10.463605+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098404 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:11.463786+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:12.463917+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:13.464604+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:14.464841+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 heartbeat osd_stat(store_statfs(0x4fb5b7000/0x0/0x4ffc00000, data 0x1167e3d/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 79749120 unmapped: 23805952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:15.465021+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.143211365s of 99.444923401s, submitted: 90
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105670 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 161 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07763c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80879616 unmapped: 22675456 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:16.465200+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 162 ms_handle_reset con 0x55f3df702400 session 0x55f3e07b61e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 22659072 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:17.465420+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 163 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07b65a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80846848 unmapped: 22708224 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:18.465589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 164 ms_handle_reset con 0x55f3e066d400 session 0x55f3e05541e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 22700032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:19.465766+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 165 ms_handle_reset con 0x55f3e066d800 session 0x55f3e05545a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 22667264 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:20.465943+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 165 heartbeat osd_stat(store_statfs(0x4fb5a1000/0x0/0x4ffc00000, data 0x11702cf/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135560 data_alloc: 218103808 data_used: 409600
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066dc00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 22642688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:21.466132+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 166 ms_handle_reset con 0x55f3e066dc00 session 0x55f3e0776780
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:22.466340+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 22634496 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb598000/0x0/0x4ffc00000, data 0x1173a1f/0x1275000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 166 heartbeat osd_stat(store_statfs(0x4fb598000/0x0/0x4ffc00000, data 0x1173a1f/0x1275000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:23.466505+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 22601728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 167 ms_handle_reset con 0x55f3df702400 session 0x55f3e07b61e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:24.466675+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 22593536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x117559c/0x1278000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 167 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x117559c/0x1278000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:25.466956+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 22568960 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791228294s of 10.066333771s, submitted: 80
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226051 data_alloc: 218103808 data_used: 430080
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 168 ms_handle_reset con 0x55f3e051d800 session 0x55f3e07c50e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:26.467081+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 92790784 unmapped: 10764288 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 169 ms_handle_reset con 0x55f3e066d800 session 0x55f3e07c5c20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:27.467217+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 19021824 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 170 ms_handle_reset con 0x55f3e066d400 session 0x55f3e079f2c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f63ef000/0x0/0x4ffc00000, data 0x5178ceb/0x527b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 170 ms_handle_reset con 0x55f3e051d400 session 0x55f3e081e780
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:28.467382+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 18989056 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 171 ms_handle_reset con 0x55f3df702400 session 0x55f3e081ed20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 171 ms_handle_reset con 0x55f3e051d800 session 0x55f3e08343c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:29.467534+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 18907136 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 172 ms_handle_reset con 0x55f3e066d400 session 0x55f3e0837e00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:30.467677+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 18874368 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 172 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168325 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 173 ms_handle_reset con 0x55f3e051d000 session 0x55f3e081e780
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 173 ms_handle_reset con 0x55f3e066d800 session 0x55f3e08352c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 173 heartbeat osd_stat(store_statfs(0x4fa3ee000/0x0/0x4ffc00000, data 0x117e5fe/0x127e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:31.467864+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 18857984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 173 heartbeat osd_stat(store_statfs(0x4fa3ee000/0x0/0x4ffc00000, data 0x117e5fe/0x127e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:32.467994+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 18857984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:33.468107+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:34.468246+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 174 heartbeat osd_stat(store_statfs(0x4fa3ea000/0x0/0x4ffc00000, data 0x118009d/0x1281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:35.468422+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 18841600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.991490364s of 10.120883942s, submitted: 285
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170675 data_alloc: 218103808 data_used: 450560
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 175 ms_handle_reset con 0x55f3df702400 session 0x55f3e0523680
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:36.468622+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 175 heartbeat osd_stat(store_statfs(0x4fa3e8000/0x0/0x4ffc00000, data 0x1181c65/0x1285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:37.468811+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 176 heartbeat osd_stat(store_statfs(0x4fa3e8000/0x0/0x4ffc00000, data 0x1181c65/0x1285000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:38.468948+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:39.469085+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:40.469244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175743 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:41.469405+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 18825216 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:42.469536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 177 ms_handle_reset con 0x55f3e051d000 session 0x55f3e0523c20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:43.469722+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:44.469885+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:45.470079+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178004 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:46.470263+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:47.470466+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:48.470632+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:49.470821+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:50.470972+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178004 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:51.471122+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 177 heartbeat osd_stat(store_statfs(0x4fa3e3000/0x0/0x4ffc00000, data 0x11852ca/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:52.471313+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.752992630s of 16.829257965s, submitted: 63
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:53.471504+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:54.471678+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:55.471921+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:56.472117+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:57.472317+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:58.472495+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:59.472654+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:00.472831+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:01.473008+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:02.473194+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:03.473374+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:04.473506+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:05.473719+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:06.473940+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:07.474112+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:08.474270+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:09.474411+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:10.474604+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:11.474859+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:12.475046+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:13.475227+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:14.475391+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:15.475581+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:16.475756+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:17.475904+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:18.476116+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:19.476318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:20.476469+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:21.476585+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:22.476822+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:23.476958+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:24.477147+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:25.477586+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:26.477774+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:27.477978+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:28.478184+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:29.478347+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:30.478490+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:31.478824+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:32.479042+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:33.479191+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:34.479317+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:35.479536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:36.480699+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:37.481447+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:38.482167+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:39.482381+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:40.483505+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:41.484340+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:42.484791+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:43.485116+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:44.485446+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:45.485721+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:46.485944+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:47.486264+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:48.486647+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:49.486854+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 18808832 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:50.487172+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:51.487341+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:52.487573+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:53.487829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:54.488062+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:55.488292+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:56.488506+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:57.488707+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:58.488988+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:59.489142+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:00.489346+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:01.489523+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:02.489719+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:03.489915+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:04.490082+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:05.490266+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:06.490453+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:07.490617+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:08.490829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238550680' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:09.490980+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:10.491127+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:11.491322+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:12.491500+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:13.491697+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:14.491901+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:15.492153+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:16.492324+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:17.492450+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:18.492616+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:19.492818+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:20.493035+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:21.493194+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:22.493374+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:23.493597+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:24.493816+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:25.494020+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:26.494202+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:27.494331+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:28.494506+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:29.494661+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:30.494911+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 18800640 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets getting new tickets!
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:31.495178+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _finish_auth 0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:31.496086+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:32.495967+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:33.496148+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:34.496301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:35.496497+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 18784256 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180978 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3df702800 session 0x55f3e04c2f00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:36.496642+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: mgrc ms_handle_reset ms_handle_reset con 0x55f3dd734c00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2102413293
Oct 01 14:22:55 compute-0 ceph-osd[89484]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2102413293,v1:192.168.122.100:6801/2102413293]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: get_auth_request con 0x55f3e066d800 auth_method 0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: mgrc handle_mgr_configure stats_period=5
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 heartbeat osd_stat(store_statfs(0x4fa3e0000/0x0/0x4ffc00000, data 0x1186d2d/0x128d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:37.496809+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3e066c800 session 0x55f3e04d4f00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e066d400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 ms_handle_reset con 0x55f3e066cc00 session 0x55f3e04c30e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051cc00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:38.496945+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:39.497074+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 18685952 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 107.484535217s of 107.496047974s, submitted: 13
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:40.497246+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3e051c800 session 0x55f3e0854000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 18661376 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189769 data_alloc: 218103808 data_used: 462848
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:41.497475+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3e051c400 session 0x55f3e077da40
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3de078800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 ms_handle_reset con 0x55f3de078800 session 0x55f3e05314a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 18661376 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fa3da000/0x0/0x4ffc00000, data 0x1188cf0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:42.497653+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 heartbeat osd_stat(store_statfs(0x4fa3da000/0x0/0x4ffc00000, data 0x1188cf0/0x1293000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3df702400 session 0x55f3de5954a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 18604032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3e051c400 session 0x55f3e0776f00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 180 ms_handle_reset con 0x55f3e051c800 session 0x55f3e04d45a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:43.497834+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 18604032 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 180 heartbeat osd_stat(store_statfs(0x4fa3d6000/0x0/0x4ffc00000, data 0x118a8a0/0x1298000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:44.497990+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 181 ms_handle_reset con 0x55f3e051d000 session 0x55f3dfee5860
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 18595840 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:45.498209+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 182 ms_handle_reset con 0x55f3e0538000 session 0x55f3de0aa000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199159 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:46.498342+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fa3d1000/0x0/0x4ffc00000, data 0x118dbc9/0x1299000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:47.498450+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:48.498618+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:49.498802+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 18546688 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.709639549s of 10.005803108s, submitted: 76
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:50.498979+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 182 heartbeat osd_stat(store_statfs(0x4fa3d4000/0x0/0x4ffc00000, data 0x118dbec/0x129a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 18538496 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 183 ms_handle_reset con 0x55f3df702400 session 0x55f3dd26b860
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204181 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:51.499118+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 18530304 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 184 ms_handle_reset con 0x55f3e051c400 session 0x55f3dff43c20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:52.499278+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:53.499441+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 184 heartbeat osd_stat(store_statfs(0x4fa3cd000/0x0/0x4ffc00000, data 0x1191343/0x129f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:54.499666+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 185 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1192dc2/0x12a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:55.499925+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208193 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:56.500173+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:57.500337+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:58.500521+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:59.500702+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:00.500880+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:01.501061+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:02.501288+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:03.501468+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:04.501678+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:05.502101+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:06.502310+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:07.502421+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:08.502587+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:09.502829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:10.503050+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:11.503252+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:12.503392+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:13.503560+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:14.503698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:15.503914+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:16.504038+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:17.504149+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:18.504267+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:19.504359+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:20.504547+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:21.504672+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:22.504773+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:23.504953+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:24.505067+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:25.505248+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:26.505359+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:27.505495+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:28.505643+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:29.505883+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:30.505991+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:31.506105+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:32.506244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:33.506452+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:34.506574+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:35.506801+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:36.506961+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:37.507162+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:38.507369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:39.507565+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:40.507715+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:41.507905+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:42.508076+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:43.508222+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:44.508435+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:45.508623+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:46.508770+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:47.508902+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:48.509198+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:49.509412+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:50.509641+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:51.510034+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:52.510184+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:53.510920+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:54.511444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:55.511762+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:56.511887+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:57.511940+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:58.512072+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:59.512218+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:00.512588+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:01.512893+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:02.513052+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:03.513191+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:04.513301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:05.513463+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:06.513597+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:07.513878+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:08.514011+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:09.514134+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:10.514313+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:11.514516+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:12.514808+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:13.514985+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:14.515982+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:15.516236+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:16.516381+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:17.516582+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:18.516793+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:19.517011+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:20.517214+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:21.517455+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:22.517665+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:23.517857+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:24.518045+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:25.518259+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:26.518448+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:27.518610+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:28.518804+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:29.518947+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:30.519169+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:31.519389+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:32.519549+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:33.519695+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:34.519835+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:35.520040+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:36.520196+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:37.520358+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:38.520628+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:39.520779+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:40.521003+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:41.521197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:42.521454+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:43.521724+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:44.521983+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:45.523022+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:46.523169+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:47.523328+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:48.523436+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:49.523559+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:50.523713+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:51.523903+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:52.524094+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:53.524285+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:54.524446+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:55.524670+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:56.524809+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:57.524985+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:58.525146+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:59.525349+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:00.525528+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:01.525714+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:02.525927+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:03.526124+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:04.526311+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:05.526471+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:06.526609+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:07.526809+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:08.527021+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:09.527202+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:10.527371+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:11.527577+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:12.527758+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:13.527927+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:14.528125+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:15.528318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:16.528480+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:17.528682+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:18.528906+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:19.529078+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:20.529254+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:21.529405+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:22.530309+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:23.530996+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:24.531476+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:25.531788+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:26.532104+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:27.532368+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:28.532634+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:29.532839+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:30.533019+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:31.533188+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:32.533403+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:33.533564+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:34.533767+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:35.534022+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:36.534217+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:37.534373+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:38.534606+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:39.534802+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:40.534946+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:41.535167+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:42.535411+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:43.535652+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:44.535886+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:45.536116+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:46.536369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:47.536515+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:48.536697+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:49.536780+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:50.537078+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:51.537367+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:52.537601+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:53.537787+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:54.538064+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:55.538278+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:56.538559+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:57.538706+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:58.538900+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:59.539060+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:00.539192+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:01.539353+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:02.539525+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:03.539698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:04.539886+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:05.540258+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:06.540545+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:07.540782+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:08.541005+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:09.541197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:10.541386+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:11.541551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:12.541763+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:13.541995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:14.542173+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:15.542569+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:16.542806+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:17.542966+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 18522112 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:18.543112+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:19.543302+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:20.543458+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 18513920 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:21.543625+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:22.543838+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:23.544098+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:24.544319+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:25.544550+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:26.544788+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:27.544972+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:28.545165+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:29.545337+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:30.545541+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:31.545654+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:32.545866+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:33.546012+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:34.546211+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:35.546416+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:36.546579+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:37.546776+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:38.546936+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:39.547109+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:40.547273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:41.547407+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:42.547558+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:43.547722+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:44.547959+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:45.548199+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:46.548348+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:47.548485+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:48.548643+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:49.548772+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:50.548966+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:51.549131+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:52.549273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:53.549395+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:54.549532+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:55.549773+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:56.549913+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:57.550044+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:58.550218+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:59.550406+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:00.550558+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:01.550704+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:02.550849+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:03.550995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:04.551150+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:05.551350+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:06.551473+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:07.551609+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:08.551720+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:09.551905+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:10.552059+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:11.552243+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:12.552399+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:13.552558+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:14.552716+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:15.552939+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:16.553111+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:17.553280+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:18.553419+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:19.553703+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:20.553811+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:21.553978+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:22.554161+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:23.554307+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:24.554486+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:25.554671+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:26.554841+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:27.554995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:28.555149+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:29.555657+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:30.555807+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 9156 writes, 34K keys, 9156 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9156 writes, 2284 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1205 writes, 3436 keys, 1205 commit groups, 1.0 writes per commit group, ingest: 1.86 MB, 0.00 MB/s
                                           Interval WAL: 1205 writes, 535 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:31.556001+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:32.556171+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:33.556316+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:34.556475+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:35.556702+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:36.556820+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:37.556971+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:38.557103+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:39.557598+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:40.557786+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:41.557933+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:42.558194+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:43.558356+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:44.558481+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:45.558666+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:46.558825+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:47.559021+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:48.559197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:49.559398+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:50.559533+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:51.559687+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 18505728 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:52.559842+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:53.559997+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:54.560195+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:55.560523+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:56.560696+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:57.560885+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:58.561073+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:59.561211+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:00.561410+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:01.561584+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:02.561778+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:03.561901+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:04.562047+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:05.562218+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:06.562395+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:07.562505+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:08.562701+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:09.562858+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:10.563066+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:11.563230+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:12.563360+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:13.563517+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:14.563661+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:15.563851+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:16.563989+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:17.564132+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:18.564279+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:19.564420+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:20.564578+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:21.564784+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:22.564895+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:23.565018+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:24.565166+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:25.565340+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:26.565496+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:27.565610+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:28.565774+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:29.565906+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:30.566073+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:31.566251+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:32.566370+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210319 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:33.566491+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:34.566645+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:35.566830+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 346.045837402s of 346.194610596s, submitted: 63
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:36.567008+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 18497536 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:37.567158+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 18391040 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:38.567317+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:39.567471+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:40.567662+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:41.567844+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:42.568038+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:43.568203+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:44.568370+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:45.568594+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:46.568855+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:47.569033+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:48.569430+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:49.569589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:50.569811+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:51.569966+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:52.570136+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:53.570308+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:54.570481+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:55.570677+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:56.570819+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:57.570984+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:58.571139+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:59.571311+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:00.571491+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:01.571658+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:02.571807+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:03.571996+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:04.572181+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:05.572354+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:06.572524+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:07.572672+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:08.572845+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:09.573004+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:10.573204+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:11.573406+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:12.573587+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:13.573761+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:14.573892+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:15.574099+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:16.574254+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:17.574419+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:18.574579+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:19.574754+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:20.574939+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:21.575189+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:22.575396+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:23.575574+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:24.575797+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:25.575974+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:26.576108+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:27.576247+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:28.576417+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:29.576604+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:30.576788+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:31.577016+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:32.577166+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:33.577344+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:34.577487+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:35.577664+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:36.577829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:37.578019+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:38.578169+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:39.578517+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:40.579253+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:41.580690+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:42.580902+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:43.581093+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:44.581305+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:45.582373+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:46.582560+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:47.582992+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:48.583146+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:49.583535+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:50.584014+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:51.584336+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:52.584477+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:53.584817+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:54.585023+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:55.585389+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:56.585552+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:57.585794+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:58.585929+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:59.586121+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:00.586338+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:01.586534+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:02.586689+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:03.586842+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:04.586973+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:05.587155+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:06.587347+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:07.587509+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:08.587712+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:09.587926+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:10.588129+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:11.588280+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:12.588441+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:13.588716+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:14.588966+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:15.589330+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:16.589551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:17.589899+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 18374656 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:18.590155+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:19.590453+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:20.590695+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:21.590963+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:22.591130+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:23.591332+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:24.591511+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:25.591679+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:26.592050+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:27.592213+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:28.592387+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:29.592584+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:30.592805+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:31.593057+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:32.593348+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:33.593593+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:34.593783+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:35.614649+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:36.614809+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:37.614977+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:38.615136+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:39.615449+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:40.615582+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:41.615721+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:42.615948+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:43.616169+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:44.616365+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:45.616850+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:46.617214+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:47.617957+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:48.618491+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:49.618721+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:50.619107+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:51.619427+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:52.619775+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:53.620075+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:54.620279+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:55.620539+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:56.620699+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:57.620937+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:58.621210+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:59.621413+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:00.621635+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:01.621856+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:02.622070+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:03.622294+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:04.624206+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:05.624434+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:06.624669+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:07.624871+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:08.625117+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:09.625316+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:10.625488+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:11.625701+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:12.625919+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:13.626080+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:14.626318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:15.626550+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:16.626716+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:17.626938+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:18.627114+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:19.627281+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:20.627496+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:21.627661+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:22.627833+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:23.628159+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:24.628369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:25.628598+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:26.628816+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:27.628953+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:28.629101+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:29.629232+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:30.629346+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:31.629480+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:32.629635+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:33.629806+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:34.629970+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:35.630182+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:36.630369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:37.630549+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:38.630837+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:39.630995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:40.631143+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:41.631346+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:42.631499+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:43.631633+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:44.631826+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:45.632038+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:46.632177+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:47.632306+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:48.632485+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:49.632631+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:50.632875+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:51.633029+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:52.633149+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:53.633284+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:54.633411+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:55.633642+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:56.633811+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:57.633961+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:58.634120+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:59.634256+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:00.634436+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:01.634624+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:02.634813+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:03.634996+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:04.635160+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:05.635301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:06.635399+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:07.635520+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:08.635680+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:09.635869+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:10.636038+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:11.636221+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:12.636361+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:13.636508+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:14.636655+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:15.636826+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:16.636960+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:17.637141+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:18.637341+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:19.637515+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:20.637651+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:21.637756+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:22.638037+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:23.638159+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:24.638309+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:25.638536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:26.638846+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:27.639040+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:28.639192+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:29.639349+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:30.639513+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:31.639685+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:32.639940+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:33.640114+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:34.640284+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:35.640504+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:36.640692+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:37.640872+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:38.641046+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:39.641197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:40.641347+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:41.641514+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:42.641710+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:43.641999+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:44.642230+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:45.642419+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:46.642576+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:47.642715+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:48.642934+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:49.643065+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:50.643199+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:51.643387+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:52.643560+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:53.643779+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:54.643933+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:55.644090+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:56.644213+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:57.644386+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:58.644587+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:59.644807+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:00.644971+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 18366464 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:01.645148+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:02.645310+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:03.645444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:04.645589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:05.645926+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:06.646047+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:07.646164+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:08.646311+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:09.646489+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:10.646613+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:11.646704+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:12.646894+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:13.647035+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:14.647183+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:15.647391+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:16.647521+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:17.647711+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:18.647858+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:19.647996+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:20.648208+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:21.648333+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:22.648490+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:23.648589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:24.648713+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:25.649056+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:26.649282+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:27.649462+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:28.649715+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:29.649963+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:30.650077+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:31.650196+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:32.650343+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:33.650518+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:34.650673+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:35.650964+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 18358272 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:36.651140+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:37.651350+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:38.651537+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:39.651692+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:40.651836+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:41.652000+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:42.652163+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:43.652436+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:44.652606+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:45.652803+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:46.652962+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:47.653111+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:48.682217+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:49.682361+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:50.682513+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:51.682721+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:52.682957+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:53.683132+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:54.683279+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:55.683498+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:56.683669+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:57.683832+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:58.684139+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:59.684339+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:00.684579+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:01.684866+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:02.685064+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:03.685281+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:04.685456+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:05.685835+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:06.686060+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:07.686259+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:08.686539+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:09.686847+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:10.687007+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:11.687187+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:12.687346+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:13.687536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:14.687661+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:15.687781+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:16.687902+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:17.688065+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:18.688303+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:19.688501+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:20.688706+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:21.688887+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:22.689296+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:23.689477+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:24.689721+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:25.690056+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:26.690237+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:27.690420+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:28.690629+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:29.690822+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:30.691381+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:31.691635+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:32.691906+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:33.692857+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:34.693456+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:35.694214+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:36.694620+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:37.695256+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:38.695904+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:39.696490+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:40.696995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:41.697357+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:42.697698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:43.698290+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:44.698669+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:45.699158+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:46.699423+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:47.699785+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:48.700071+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:49.700364+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:50.700633+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:51.700845+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:52.701038+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:53.701244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:54.701514+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:55.701856+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:56.702046+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:57.702230+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:58.702374+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:59.702554+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:00.702893+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:01.703187+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:02.703907+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:03.704310+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:04.704518+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:05.704810+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:06.705037+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:07.705202+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:08.705451+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:09.705684+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:10.705929+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:11.706105+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:12.706261+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:13.706443+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:14.706821+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:15.707013+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:16.707276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:17.707515+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:18.707784+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:19.708115+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:20.708384+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:21.708652+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:22.708885+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:23.709062+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:24.709253+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:25.709408+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:26.709567+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:27.709750+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:28.709910+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:29.710059+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:30.710246+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:31.710407+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:32.710607+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:33.710876+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:34.711666+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:35.711832+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:36.712010+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:37.712199+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:38.712314+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209439 data_alloc: 218103808 data_used: 479232
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:39.712506+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:40.712704+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:41.712940+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 heartbeat osd_stat(store_statfs(0x4fa3c9000/0x0/0x4ffc00000, data 0x1194825/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:42.713159+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:43.713326+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 427.017700195s of 427.785858154s, submitted: 90
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1209109 data_alloc: 218103808 data_used: 483328
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 18350080 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:44.713466+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 187 ms_handle_reset con 0x55f3e051c800 session 0x55f3e07c45a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:45.713699+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:46.713850+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:47.713939+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:48.714123+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210886 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 187 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:49.714301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:50.714490+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:51.714683+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 18300928 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:52.714845+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c8000/0x0/0x4ffc00000, data 0x11962b2/0x12a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:53.715041+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:54.715214+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:55.715371+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:56.715508+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:57.715697+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:58.715854+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:59.716006+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:00.716172+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:01.716331+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:02.716495+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:03.716659+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:04.716866+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:05.717050+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:06.717222+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:07.717389+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:08.717575+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:09.717832+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:10.718027+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:11.718229+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:12.718367+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:13.718551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:14.718720+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:15.718964+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:16.719125+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:17.719276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:18.719419+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:19.719596+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:20.719830+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:21.720034+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:22.720212+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:23.720402+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:24.720551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:25.720807+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:26.720975+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:27.721276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:28.721505+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:29.721678+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:30.721838+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:31.721968+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:32.722108+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:33.722266+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:34.722413+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:35.722596+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:36.722816+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:37.722977+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:38.723235+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:39.723817+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:40.724235+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:41.724607+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:42.724920+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:43.725192+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:44.725401+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:45.725661+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:46.725873+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:47.726027+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:48.726376+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:49.726701+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:50.726919+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:51.727086+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:52.727328+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:53.727685+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:54.728059+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:55.728303+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:56.728442+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:57.728670+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:58.728836+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:59.729108+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:00.729255+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:01.729481+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:02.729693+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:03.729892+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:04.730065+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:05.730319+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:06.730504+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:07.730686+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:08.730854+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:09.731005+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:10.731197+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:11.731382+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:12.731546+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:13.731770+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:14.731930+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:15.732122+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:16.732273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:17.732444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:18.732675+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:19.732886+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:20.733047+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:21.733221+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:22.733391+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:23.733546+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:24.733697+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:25.734019+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:26.734233+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:27.734372+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:28.734536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:29.734847+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:30.735077+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 9426 writes, 34K keys, 9426 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9426 writes, 2411 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 270 writes, 502 keys, 270 commit groups, 1.0 writes per commit group, ingest: 0.19 MB, 0.00 MB/s
                                           Interval WAL: 270 writes, 127 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:31.735281+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:32.735469+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:33.735631+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:34.735829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213860 data_alloc: 218103808 data_used: 487424
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:35.736014+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 18292736 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051d000 session 0x55f3e07c52c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051d800 session 0x55f3e08372c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3df702400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e0538400 session 0x55f3e04ac1e0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:36.736167+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e066d400 session 0x55f3e0522000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 ms_handle_reset con 0x55f3e051cc00 session 0x55f3e0837c20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051c800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:37.736341+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:38.736461+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:39.736638+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254980 data_alloc: 234881024 data_used: 14123008
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:40.736777+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:41.737002+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 heartbeat osd_stat(store_statfs(0x4fa3c5000/0x0/0x4ffc00000, data 0x1197d15/0x12a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:42.737219+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:43.737380+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 120.243713379s of 120.506912231s, submitted: 53
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:44.737539+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254106 data_alloc: 234881024 data_used: 14118912
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 3481600 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:45.737705+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 189 ms_handle_reset con 0x55f3e051d000 session 0x55f3e08363c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:46.737887+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 189 heartbeat osd_stat(store_statfs(0x4fb3c3000/0x0/0x4ffc00000, data 0x1998c3/0x2aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:47.738036+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88514560 unmapped: 15040512 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:48.738161+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c0000/0x0/0x4ffc00000, data 0x19b494/0x2ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 190 ms_handle_reset con 0x55f3e0538800 session 0x55f3dd86d860
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:49.738400+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110437 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:50.738589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:51.738723+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 190 heartbeat osd_stat(store_statfs(0x4fb3c1000/0x0/0x4ffc00000, data 0x19b461/0x2ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:52.738955+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:53.739151+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:54.739318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110437 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.687598228s of 10.906754494s, submitted: 69
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:55.739601+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:56.739815+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:57.740012+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 192 heartbeat osd_stat(store_statfs(0x4fb3bf000/0x0/0x4ffc00000, data 0x19cee0/0x2ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:58.740200+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:59.740353+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538c00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116193 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:00.740517+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 ms_handle_reset con 0x55f3e0538c00 session 0x55f3e011a3c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:01.740704+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:02.740967+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:03.741131+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:04.741311+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:05.741520+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:06.741713+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:07.741927+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:08.742106+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:09.742300+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:10.742483+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:11.742680+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:12.742939+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:13.743125+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:14.743294+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:15.743470+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:16.743592+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:17.743775+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:18.743917+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:19.744812+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:20.744969+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:21.745115+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:22.745265+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:23.745402+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:24.745585+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:25.745887+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:26.746080+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:27.746265+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:28.746394+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:29.746586+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:30.746795+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:31.746927+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:32.747076+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:33.747211+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:34.747387+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119167 data_alloc: 218103808 data_used: 495616
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 14909440 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.812831879s of 39.855922699s, submitted: 37
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:35.747601+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 14893056 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:36.747720+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 14827520 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:37.747905+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:38.748032+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:39.748154+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:40.748331+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:41.748458+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:42.748609+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:43.748824+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:44.748982+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:45.749151+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:46.749322+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:47.749507+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:48.749716+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:49.749870+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:50.749998+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:51.750222+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:52.750427+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:53.750600+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:54.750810+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:55.751032+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:56.751258+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:57.751395+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:58.751551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:59.751805+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:00.751935+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:01.752106+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:02.752259+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:03.752423+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:04.752599+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:05.752857+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:06.753112+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:07.753320+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:08.753516+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:09.753682+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:10.753867+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:11.754046+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:12.754218+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:13.754427+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:14.754646+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:15.754955+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:16.755227+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:17.755423+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:18.755589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:19.755827+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:20.756007+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:21.756138+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:22.756257+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:23.756395+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:24.756560+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:25.756822+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:26.756974+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:27.757128+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:28.757237+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:29.757409+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:30.757591+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:31.757812+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:32.757960+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:33.758144+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:34.758354+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:35.758631+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:36.758803+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:37.758937+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:38.759105+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:39.759248+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1118607 data_alloc: 218103808 data_used: 503808
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:40.759401+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 heartbeat osd_stat(store_statfs(0x4fb3ba000/0x0/0x4ffc00000, data 0x1a04c0/0x2b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:41.759585+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:42.759783+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 14761984 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0539000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 67.956901550s of 68.342582703s, submitted: 110
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:43.759989+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 14696448 heap: 103555072 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 194 ms_handle_reset con 0x55f3e0539000 session 0x55f3e0523c20
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:44.760146+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 194 heartbeat osd_stat(store_statfs(0x4fb3b9000/0x0/0x4ffc00000, data 0x1a04e3/0x2b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131474 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:45.760334+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 ms_handle_reset con 0x55f3e051d000 session 0x55f3e1f9f4a0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabb3000/0x0/0x4ffc00000, data 0x9a2093/0xaba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:46.760513+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:47.760792+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:48.760947+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:49.761082+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:50.761287+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:51.761475+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:52.761664+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:53.761905+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:54.762074+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:55.762310+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:56.762479+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:57.762654+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:58.762805+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:59.762931+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:00.763080+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:01.763244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:02.763460+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:03.763628+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:04.763874+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:05.764153+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:06.764444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:07.764642+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:08.764899+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:09.765079+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191560 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88399872 unmapped: 31940608 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 heartbeat osd_stat(store_statfs(0x4fabae000/0x0/0x4ffc00000, data 0x9a3c33/0xabe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:10.765225+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538400
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.289821625s of 27.663368225s, submitted: 38
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:11.765483+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 196 ms_handle_reset con 0x55f3e0538400 session 0x55f3e0530780
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:12.765655+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:13.765812+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:14.765958+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192230 data_alloc: 218103808 data_used: 512000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:15.766162+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:16.766359+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88334336 unmapped: 32006144 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:17.766577+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 196 heartbeat osd_stat(store_statfs(0x4fabad000/0x0/0x4ffc00000, data 0x9a5804/0xac1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:18.766794+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:19.766987+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:20.767198+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:21.767386+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:22.767520+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:23.767683+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:24.767832+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88342528 unmapped: 31997952 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:25.768087+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:26.768356+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:27.768622+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:28.768938+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:29.769163+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:30.769468+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:31.769719+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:32.770041+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:33.770284+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:34.770554+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:35.770802+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:36.771042+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:37.771276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:38.771422+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:39.771600+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:40.771929+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:41.772137+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:42.772330+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:43.772497+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:44.772676+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:45.772946+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:46.773141+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:47.773444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:48.773680+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:49.773916+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:50.774134+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:51.774340+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:52.774571+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:53.774786+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:54.774936+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:55.775203+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:56.776642+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:57.777914+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:58.778975+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:59.780095+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:00.780276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:01.781193+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:02.781995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:03.782227+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:04.782964+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:05.783536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:06.784084+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:07.784211+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:08.784612+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:09.784821+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:10.785158+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:11.785358+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:12.785505+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:13.785833+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:14.785999+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:15.786398+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:16.786679+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:17.787010+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:18.787365+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:19.787892+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:20.788077+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:21.788318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:22.788573+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:23.788829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:24.789009+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:25.789257+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:26.789440+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:27.789599+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:28.789849+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:29.790003+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:30.790129+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:31.790247+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:32.790367+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:33.790503+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:34.790659+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:35.790839+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:36.791017+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:37.791202+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:38.791349+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:39.791502+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:40.791678+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:41.791832+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:42.791951+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:43.792085+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:44.792285+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:45.792444+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:46.792558+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:47.792654+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:48.792800+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:49.792975+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:50.793116+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:51.793314+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:52.793516+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:53.793774+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:54.794003+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:55.794249+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:56.794402+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:57.794585+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:58.794789+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:59.794980+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:00.795108+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:01.795241+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:02.795418+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:03.795570+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:04.795685+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:05.795836+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:06.795994+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:07.796150+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:08.796278+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:09.796417+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:10.796532+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:11.796686+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:12.796790+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:13.796947+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:14.797076+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:15.797243+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:16.797418+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:17.797540+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:18.797711+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:19.797906+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:20.798079+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:21.798273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:22.798420+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:23.798611+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:24.798794+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:25.798978+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:26.799188+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:27.799332+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:28.799481+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:29.799647+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:30.799840+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:31.800026+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:32.800301+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:33.800479+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:34.801029+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:35.801273+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:36.801542+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:37.801720+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:38.802013+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:39.802177+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:40.802536+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:41.802708+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:42.802974+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:43.803162+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:44.803626+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:45.803829+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:46.804147+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:47.804338+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:48.804508+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:49.804690+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:50.804933+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:51.805100+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:52.805284+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:53.805436+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:54.805601+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:55.806171+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:56.806396+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:57.806589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:58.806820+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:59.807020+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:00.807205+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:01.807370+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:02.807593+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:03.807843+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:04.808000+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:05.808210+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:06.808399+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:07.808541+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:08.808698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:09.808836+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:10.808947+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:11.809088+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:12.809224+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:13.809315+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:14.809467+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:15.809667+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:16.809816+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:17.809976+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88350720 unmapped: 31989760 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:18.810145+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:19.810304+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:20.810496+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:21.810630+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:22.810776+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:23.810950+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:24.811109+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:25.811316+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:26.811452+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:27.811618+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:28.811808+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:29.811955+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:30.812086+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:31.812244+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:32.812368+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:33.812546+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:34.812816+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:35.813064+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:36.813220+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:37.813360+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:38.813534+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:39.813698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:40.815687+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:41.815842+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:42.816017+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:43.816114+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:44.816295+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:45.816539+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:46.816815+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:47.817001+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:48.817184+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:49.817372+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:50.817540+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:51.817655+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:52.817830+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:53.818004+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:54.818166+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:55.818322+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:56.818492+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:57.818697+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:58.818956+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:59.819155+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:00.819300+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:01.819446+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:02.819600+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:03.819779+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:04.819973+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:05.820240+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:06.820400+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:07.820547+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:08.820708+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:09.820820+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:10.821004+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:11.821131+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:12.821257+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:13.821463+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:14.821650+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:15.821894+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:16.822062+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:17.822245+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:18.822419+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:19.822579+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:20.822787+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:21.822990+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:22.823230+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:23.823393+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:24.823603+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:25.823793+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:26.823920+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:27.824065+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:28.824204+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:29.824388+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:30.824616+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:31.824802+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:32.824922+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:33.825090+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:34.825249+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:35.825502+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:36.825692+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:37.825860+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:38.826036+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:39.826164+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:40.826363+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:41.826493+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:42.826629+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:43.826827+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:44.826971+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:45.827207+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:46.827373+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:47.827566+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:48.827827+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:49.827994+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:50.828175+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:51.828296+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:52.828526+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:53.828707+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:54.828892+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:55.829073+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:56.829252+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:57.829421+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:58.829619+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:59.829872+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:00.830026+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:01.830173+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:02.830290+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:03.830441+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:04.830611+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:05.830787+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:06.830946+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:07.831165+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:08.831343+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:09.831492+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:10.831668+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:11.831799+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:12.831969+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:13.832128+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:14.832276+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:15.832429+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:16.832630+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:17.832806+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:18.832991+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:19.833192+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:20.833360+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:21.833502+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:22.833669+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:23.833925+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:24.834141+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:25.834354+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:26.834542+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:27.834788+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:28.834924+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:29.835070+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:30.835232+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:31.835352+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:32.835500+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:33.835680+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:34.835820+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:35.836086+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:36.836203+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:37.836342+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:38.836531+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:39.836692+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:40.836851+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:41.837035+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:42.837176+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:43.837292+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:44.837416+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:45.837551+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:46.837703+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:47.837867+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:48.838032+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:49.838185+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:50.838387+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:51.838587+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:52.838809+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:53.838983+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:54.839198+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:55.839392+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:56.839537+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:57.839706+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:58.839907+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:59.840064+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:00.841284+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:01.841425+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:02.841600+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:03.841722+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:04.841960+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:05.842149+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:06.842308+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:07.842480+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:08.842624+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:09.842801+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:10.842914+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:11.843074+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:12.843311+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:13.843514+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:14.843714+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:15.843991+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:16.844142+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:17.844261+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:18.845524+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:19.845793+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:20.846471+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:21.846660+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:22.846995+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:23.847571+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:24.848099+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:25.848520+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:26.848950+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:27.849340+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:28.849580+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:29.849827+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:30.850072+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:31.850283+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:32.850443+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:33.850606+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:34.850841+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:35.851091+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:36.851225+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:37.851378+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:38.851512+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:39.851676+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:40.851839+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:41.852006+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:42.852407+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:43.852568+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:44.852749+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:45.852942+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:46.853115+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:47.853298+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:48.853490+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:49.853616+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:50.853838+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:51.854021+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:52.854153+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:53.854413+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:54.854593+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:55.855499+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:56.855628+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:57.855777+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:58.855924+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:59.856080+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:00.856205+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:01.856417+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:02.856589+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:03.856781+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:04.857023+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:05.857194+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:06.857391+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:07.857542+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:08.857834+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:09.858034+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:10.858274+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:11.858441+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:12.858603+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:13.858813+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:14.858936+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:15.859151+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:16.859314+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:17.859463+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:18.859683+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:19.859858+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:20.860032+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:21.860174+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:22.860526+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:23.861564+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:24.862543+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:25.863195+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:26.863496+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:27.864193+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:28.864815+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:29.865395+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 9928 writes, 35K keys, 9928 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9928 writes, 2633 syncs, 3.77 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 502 writes, 1216 keys, 502 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s
                                           Interval WAL: 502 writes, 222 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:30.865948+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:31.866168+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:32.866657+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:33.867143+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:34.867605+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:35.868173+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:36.868407+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:37.868685+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:38.868874+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:39.869132+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:40.869336+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:41.869566+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:42.869709+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:43.869954+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:44.870200+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:45.870447+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:46.870896+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:47.871050+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:48.871237+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:49.871511+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:50.871830+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:51.871985+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:52.872158+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:53.872336+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:54.872539+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:55.873080+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 88358912 unmapped: 31981568 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 heartbeat osd_stat(store_statfs(0x4faba9000/0x0/0x4ffc00000, data 0x9a7267/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:56.873564+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538800
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196404 data_alloc: 218103808 data_used: 520192
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 465.580749512s of 465.763977051s, submitted: 26
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 30908416 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 198 ms_handle_reset con 0x55f3e0538800 session 0x55f3e066f2c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:57.873715+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 30908416 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:58.873790+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0538c00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 30908416 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:59.873998+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 198 heartbeat osd_stat(store_statfs(0x4fabaa000/0x0/0x4ffc00000, data 0x9a8dcf/0xac4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 30892032 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:00.874138+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 30892032 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:01.874399+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199672 data_alloc: 218103808 data_used: 540672
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89456640 unmapped: 30883840 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:02.874561+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89456640 unmapped: 30883840 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:03.874766+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 199 heartbeat osd_stat(store_statfs(0x4fb3a8000/0x0/0x4ffc00000, data 0x1aa990/0x2c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 30859264 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:04.874949+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 199 ms_handle_reset con 0x55f3e0538c00 session 0x55f3df5de3c0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 30859264 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:05.875099+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 30859264 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:06.875246+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144499 data_alloc: 218103808 data_used: 536576
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 30859264 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:07.875521+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.906404495s of 10.985466957s, submitted: 94
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 30859264 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:08.875716+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e0539000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 200 heartbeat osd_stat(store_statfs(0x4fb3a4000/0x0/0x4ffc00000, data 0x1ac3f3/0x2c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:09.876015+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _renew_subs
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 ms_handle_reset con 0x55f3e0539000 session 0x55f3e1fb5680
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:10.876355+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:11.876620+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208981 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:12.876963+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:13.877297+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:14.877497+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:15.877765+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:16.877959+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208981 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:17.878159+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:18.878344+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:19.878516+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:20.878654+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:21.878882+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208981 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:22.879047+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:23.879260+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:24.879417+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:25.879642+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:26.879873+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208981 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:27.880035+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:28.880267+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:29.880424+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:30.880584+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:31.880840+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208981 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:32.881029+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:33.881145+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:34.881252+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:35.881425+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fab9f000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89497600 unmapped: 30842880 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.435482025s of 28.492696762s, submitted: 16
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:36.881595+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207253 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 30801920 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:37.881806+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89563136 unmapped: 30777344 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4faba1000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:38.881926+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89604096 unmapped: 30736384 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:39.882070+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:40.882229+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:41.882398+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207253 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:42.882535+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:43.882688+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:44.882859+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:45.883099+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:46.883287+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207253 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:47.883450+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:48.883559+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:49.883682+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:50.883846+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:51.884041+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207253 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:52.884138+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:53.884308+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:54.884498+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:55.884795+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:56.884925+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207253 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:57.885044+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:58.885207+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 heartbeat osd_stat(store_statfs(0x4fa791000/0x0/0x4ffc00000, data 0x9adf80/0xacd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: handle_auth_request added challenge on 0x55f3e051d000
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:59.885365+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.729925156s of 23.632308960s, submitted: 90
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 202 ms_handle_reset con 0x55f3e051d000 session 0x55f3dfdb4f00
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:00.885583+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:01.886211+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157138 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 202 heartbeat osd_stat(store_statfs(0x4faf8e000/0x0/0x4ffc00000, data 0x1afb41/0x2cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:02.886860+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:03.887234+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:04.887488+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:05.887805+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 202 heartbeat osd_stat(store_statfs(0x4faf8e000/0x0/0x4ffc00000, data 0x1afb41/0x2cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:06.887955+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157138 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:07.888129+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:08.888263+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:09.888482+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 202 heartbeat osd_stat(store_statfs(0x4faf8e000/0x0/0x4ffc00000, data 0x1afb41/0x2cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.744577408s of 10.025424004s, submitted: 17
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:10.888650+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:11.888777+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:12.888901+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:13.889031+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:14.889155+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:15.889885+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:16.890003+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:17.890174+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:18.890381+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:19.890574+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:20.890812+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:21.891081+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349126885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:22.891282+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:23.891437+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:24.891644+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:25.891869+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:26.892013+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:27.892228+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:28.892411+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:29.892555+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:30.892817+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:31.893026+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349126885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:32.893347+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:33.893855+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:34.894127+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:35.894369+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:36.894552+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:37.894679+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:38.894880+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:39.895062+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:40.895325+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:41.895534+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:42.895692+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:43.895867+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:44.896259+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:45.896512+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 30703616 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:46.896669+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:47.896872+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:48.897020+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:49.897181+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:50.897328+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:51.897506+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:52.897698+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:53.897969+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:54.898168+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:55.899082+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:56.899230+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:57.899401+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:58.899540+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:59.899703+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:00.899986+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:01.900252+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:02.900431+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:03.900575+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:04.900701+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:05.900943+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:06.901104+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:07.901219+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:08.901364+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:09.901449+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:10.902153+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89645056 unmapped: 30695424 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:11.902299+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:12.902453+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:13.902575+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:14.902694+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:15.902895+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:16.903064+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:17.903195+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:18.903309+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:19.903429+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: osd.1 203 heartbeat osd_stat(store_statfs(0x4faf8b000/0x0/0x4ffc00000, data 0x1b15a4/0x2d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:20.903542+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89653248 unmapped: 30687232 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:21.903744+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 30531584 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'config diff' '{prefix=config diff}'
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'config show' '{prefix=config show}'
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:55 compute-0 ceph-osd[89484]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:55 compute-0 ceph-osd[89484]: bluestore.MempoolThread(0x55f3dbeebb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160112 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:22.903902+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 90243072 unmapped: 30097408 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:23.904089+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89972736 unmapped: 30367744 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: tick
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_tickets
Oct 01 14:22:55 compute-0 ceph-osd[89484]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:24.904318+0000)
Oct 01 14:22:55 compute-0 ceph-osd[89484]: prioritycache tune_memory target: 4294967296 mapped: 89956352 unmapped: 30384128 heap: 120340480 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:55 compute-0 ceph-osd[89484]: do_command 'log dump' '{prefix=log dump}'
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833656121' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: pgmap v2416: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1842463792' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='mgr.14132 192.168.122.100:0/2708195751' entity='mgr.compute-0.puxjpb' 
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/143039329' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/260546938' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1238550680' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1349126885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.10:0/1349126885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1833656121' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121017553' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 14:22:55 compute-0 rsyslogd[1009]: imjournal from <np0005464214:ceph-osd>: begin to drop messages due to rate-limiting
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753763942' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 14:22:55 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:55 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 01 14:22:55 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047657026' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15239 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 01 14:22:56 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531769907' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4121017553' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1753763942' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2047657026' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1531769907' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15243 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15245 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:56 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15249 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15251 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mon[74802]: pgmap v2417: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:57 compute-0 ceph-mon[74802]: from='client.15239 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mon[74802]: from='client.15243 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mon[74802]: from='client.15245 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15255 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:57 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 01 14:22:57 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712398242' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15259 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] _maybe_adjust
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:22:57 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:22:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:22:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 01 14:22:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470285706' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15263 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.15247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.15249 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.15251 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.15255 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1712398242' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/470285706' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 01 14:22:58 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 01 14:22:58 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315121491' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 01 14:22:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276858919' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-mon[74802]: pgmap v2418: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='client.15259 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='client.15263 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2315121491' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4276858919' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 01 14:22:59 compute-0 ceph-mon[74802]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:12.423585+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:13.423821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:14.423989+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:15.424200+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:16.424392+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:17.424657+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:18.424888+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:19.425069+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:20.425315+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:21.425535+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 21209088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:22.425844+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:23.426045+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:24.426243+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:25.426522+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6754 writes, 26K keys, 6754 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6754 writes, 1414 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 707 writes, 1753 keys, 707 commit groups, 1.0 writes per commit group, ingest: 0.96 MB, 0.00 MB/s
                                           Interval WAL: 707 writes, 319 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:26.426714+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:27.427013+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:28.427189+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:29.427418+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:30.427633+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:31.427886+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:32.428068+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:33.428235+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:34.428430+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:35.428607+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:36.428827+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:37.429014+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:38.429168+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:39.429343+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:40.429529+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:41.429709+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 21192704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:42.429964+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:43.430157+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:44.430313+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:45.430450+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:46.430636+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:47.430913+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:48.431078+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:49.431234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:50.431416+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:51.431592+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:52.431848+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:53.432012+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:54.432191+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:55.432378+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:56.432552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:57.432781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:58.432933+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:49:59.433075+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:00.433272+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:01.433446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 21176320 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:02.433630+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:03.433769+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:04.434074+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:05.434209+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 podman[322475]: 2025-10-01 14:22:59.532351488 +0000 UTC m=+0.080023273 container health_status a1dc94033b3cdaf0c1939938a446bc6911aa0e2bf0fa0c22c3e618390e8d96e1 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:afd5d6822b86ea0930b2011fede834bb24495995d7baac03363ab61d89f07a22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=36bccb96575468ec919301205d8daa2c, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20250923)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 podman[322478]: 2025-10-01 14:22:59.532948097 +0000 UTC m=+0.071370968 container health_status dfa509645741d50db256c392f2c7e50ef8a1653f296eb853aefbe3d2ba4746c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250923, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:499c6d82390ee2dbb91628d2e42671406372fb603d697685a04145cf6dd8d0ab', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=36bccb96575468ec919301205d8daa2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:06.434373+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:07.434597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:08.434823+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:09.435029+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:10.435254+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:11.435465+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:12.435660+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:13.435840+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:14.436032+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:15.436200+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:16.436371+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:17.436571+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 21159936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:18.436901+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 21143552 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:19.437062+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 21143552 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:20.437337+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 21143552 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:21.437538+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 21143552 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:22.437787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:23.437999+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:24.438151+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:25.438285+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:26.438383+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:27.438538+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:28.438707+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:29.438902+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:30.439100+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:31.439244+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:32.439429+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:33.439529+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:34.439695+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077052 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:35.439812+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 21127168 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 218.706146240s of 218.717651367s, submitted: 9
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:36.439967+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 21061632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:37.440132+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 21037056 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:38.440240+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 21037056 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:39.440364+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 21037056 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:40.440547+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 21037056 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:41.440724+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 21037056 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:42.440917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 21028864 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:43.441056+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 21028864 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:44.441214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 21028864 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:45.441385+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 21028864 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:46.441574+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 21028864 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:47.441767+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:48.441923+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:49.442124+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:50.442391+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:51.442562+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:52.442718+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:53.442890+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:54.443086+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:55.443251+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:56.443418+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:57.443597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:58.443725+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:50:59.443888+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:00.444121+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:01.444275+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:02.444429+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:03.444577+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:04.444785+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:05.445551+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:06.445713+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:07.445905+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:08.446092+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:09.446809+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:10.447018+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:11.447170+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:12.447314+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:13.447493+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:14.447663+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:15.447817+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:16.447983+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:17.448112+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:18.448269+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:19.448413+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:20.448602+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:21.448797+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:22.448927+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:23.449091+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:24.449261+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:25.449432+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:26.449604+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:27.449820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:28.449983+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:29.450139+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:30.450312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:31.450448+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:32.450653+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:33.450812+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:34.450974+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:35.451151+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:36.451335+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:37.451502+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:38.451695+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:39.451886+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:40.452108+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:41.452261+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:42.452408+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:43.452533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:44.452803+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:45.452970+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:46.453189+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:47.453330+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:48.453466+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:49.453627+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:50.453824+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:51.453990+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:52.454199+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:53.454346+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:54.454503+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:55.454660+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:56.454821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:57.454986+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:58.455149+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:51:59.455354+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:00.455576+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:01.455788+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:02.455955+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:03.456086+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:04.456250+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:05.456464+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:06.456592+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:07.456789+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:08.456951+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:09.457132+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:10.457388+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fbdcb000/0x0/0x4ffc00000, data 0xd676e1/0xe53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:11.457552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:12.457675+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:13.457806+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:14.458075+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:15.458275+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076172 data_alloc: 218103808 data_used: 348160
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.817398071s of 100.149177551s, submitted: 106
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 21020672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:16.458437+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fbdc7000/0x0/0x4ffc00000, data 0xd6925e/0xe56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592a9800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 21004288 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 162 ms_handle_reset con 0x55b6592a9800 session 0x55b659245a40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:17.458577+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 21004288 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 163 ms_handle_reset con 0x55b6592db000 session 0x55b6593570e0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:18.458767+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 21004288 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:19.458927+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db400
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 20963328 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 165 ms_handle_reset con 0x55b6592db400 session 0x55b6593572c0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:20.459103+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fbdba000/0x0/0x4ffc00000, data 0xd6ed81/0xe62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101838 data_alloc: 218103808 data_used: 356352
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 20971520 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:21.459257+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 166 ms_handle_reset con 0x55b6573af000 session 0x55b659960960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 20971520 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:22.459420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573afc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 20971520 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:23.459571+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 166 ms_handle_reset con 0x55b6573afc00 session 0x55b659960b40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 20971520 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:24.459828+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fbdb3000/0x0/0x4ffc00000, data 0xd724ae/0xe6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592a9800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 20971520 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:25.459963+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165223 data_alloc: 218103808 data_used: 356352
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 168 ms_handle_reset con 0x55b6592a9800 session 0x55b659960f00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 20856832 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.833057404s of 10.098741531s, submitted: 67
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:26.460111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 169 ms_handle_reset con 0x55b6592db800 session 0x55b6599614a0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 20807680 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592dbc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:27.460244+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 170 ms_handle_reset con 0x55b6592db000 session 0x55b65947e1e0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 170 ms_handle_reset con 0x55b6592dbc00 session 0x55b659244b40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573afc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 20766720 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:28.460453+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 171 heartbeat osd_stat(store_statfs(0x4fada9000/0x0/0x4ffc00000, data 0x1d7936b/0x1e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 171 ms_handle_reset con 0x55b6573afc00 session 0x55b65947fa40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 171 ms_handle_reset con 0x55b6573af000 session 0x55b6599881e0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 20742144 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:29.460635+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592a9800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 172 ms_handle_reset con 0x55b6592a9800 session 0x55b6592af0e0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 20774912 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:30.460840+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125318 data_alloc: 218103808 data_used: 364544
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 173 ms_handle_reset con 0x55b6592db800 session 0x55b659547680
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 173 ms_handle_reset con 0x55b6592db000 session 0x55b6594b2b40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 20701184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:31.461005+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 20701184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:32.461182+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 20701184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:33.461328+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 174 heartbeat osd_stat(store_statfs(0x4fb98f000/0x0/0x4ffc00000, data 0xd7f941/0xe7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 20701184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:34.461494+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 20701184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:35.461661+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131805 data_alloc: 218103808 data_used: 372736
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 175 ms_handle_reset con 0x55b6592db800 session 0x55b659547c20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:36.461822+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:37.461987+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 175 heartbeat osd_stat(store_statfs(0x4fb98b000/0x0/0x4ffc00000, data 0xd8191d/0xe82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.069460869s of 11.788814545s, submitted: 228
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:38.462078+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 176 heartbeat osd_stat(store_statfs(0x4fb988000/0x0/0x4ffc00000, data 0xd833c0/0xe85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 20635648 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:39.462239+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 20635648 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:40.462491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140716 data_alloc: 218103808 data_used: 385024
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78946304 unmapped: 20627456 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:41.462672+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 20561920 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:42.462830+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 ms_handle_reset con 0x55b6573af000 session 0x55b6599b41e0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 heartbeat osd_stat(store_statfs(0x4fb986000/0x0/0x4ffc00000, data 0xd84f6e/0xe87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 heartbeat osd_stat(store_statfs(0x4fb988000/0x0/0x4ffc00000, data 0xd84b6e/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:43.463027+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:44.463191+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:45.463341+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141119 data_alloc: 218103808 data_used: 393216
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:46.463483+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:47.463643+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 heartbeat osd_stat(store_statfs(0x4fb988000/0x0/0x4ffc00000, data 0xd84b6e/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:48.463816+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:49.463942+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:50.464234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141279 data_alloc: 218103808 data_used: 397312
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 heartbeat osd_stat(store_statfs(0x4fb988000/0x0/0x4ffc00000, data 0xd84b6e/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:51.464414+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:52.464557+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:53.464700+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:54.464864+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 heartbeat osd_stat(store_statfs(0x4fb988000/0x0/0x4ffc00000, data 0xd84b6e/0xe86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 177 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.678228378s of 16.802659988s, submitted: 52
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:55.465046+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:56.465245+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:57.465480+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:58.465649+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:52:59.465843+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:00.466065+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:01.466235+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:02.466430+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:03.466621+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:04.466813+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:05.466974+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:06.467217+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:07.467368+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:08.467768+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:09.467949+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:10.468176+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:11.468371+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:12.468548+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:13.468782+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:14.468953+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:15.469114+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:16.469279+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:17.469434+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:18.469596+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:19.469845+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:20.470133+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:21.470292+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:22.470647+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:23.470837+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:24.470992+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:25.471165+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:26.471338+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:27.471529+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:28.471691+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:29.471788+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:30.471941+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:31.472094+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:32.472308+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:33.472471+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:34.472639+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:35.472862+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:36.473340+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:37.474479+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:38.477082+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:39.477219+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:40.477520+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:41.477775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:42.477912+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:43.478062+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:44.478288+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:45.478509+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:46.478672+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:47.478844+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:48.479064+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:49.479285+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:50.479507+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:51.479716+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:52.479930+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:53.480127+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 podman[322477]: 2025-10-01 14:22:59.562524836 +0000 UTC m=+0.101119502 container health_status c13c85560319b246b9406aed1b6b3015b2c424d3ffff37d0301b6634f5976b0d (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250923, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:c4414cc2680fb1bacbf99261f759f4ef7401fb2e4953140270bffdab8e002f22', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:54.480312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:55.480486+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:56.480651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:57.480825+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:58.481007+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:53:59.481161+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:00.481358+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:01.481526+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:02.481727+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:03.481936+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:04.482114+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:05.482265+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:06.482442+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:07.482604+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:08.482817+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:09.483018+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:10.483189+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:11.483420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:12.483595+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:13.483852+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:14.484090+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:15.484308+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:16.484486+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:17.484699+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:18.484868+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:19.485036+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:20.485283+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:21.485493+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:22.485664+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:23.485932+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:24.486155+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:25.486335+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78921728 unmapped: 20652032 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets getting new tickets!
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:26.486675+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _finish_auth 0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:26.488022+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:27.486796+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:28.486994+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:29.487154+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:30.487382+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:31.487526+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:32.487689+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:33.487933+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:34.488127+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:35.488288+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145453 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:36.488459+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:37.488605+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 ms_handle_reset con 0x55b6561c1800 session 0x55b6593625a0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b656c8bc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:38.488783+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:39.488956+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 20643840 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0xd865d1/0xe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:40.489150+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 105.655426025s of 105.669166565s, submitted: 12
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 20635648 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149099 data_alloc: 218103808 data_used: 405504
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 179 heartbeat osd_stat(store_statfs(0x4fb980000/0x0/0x4ffc00000, data 0xd8814e/0xe8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:41.489322+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 20635648 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:42.489460+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6561c1800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 180 ms_handle_reset con 0x55b6561c1800 session 0x55b6599c6960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78970880 unmapped: 20602880 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592a9800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 180 ms_handle_reset con 0x55b6592a9800 session 0x55b6599c6b40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6561c1800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 180 ms_handle_reset con 0x55b6561c1800 session 0x55b6599c6d20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:43.489595+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 78979072 unmapped: 20594688 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:44.489766+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 19488768 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 181 ms_handle_reset con 0x55b6573af000 session 0x55b6599c6f00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:45.489908+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 18440192 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 182 ms_handle_reset con 0x55b6592db000 session 0x55b6599c74a0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156789 data_alloc: 218103808 data_used: 413696
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:46.490045+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 182 heartbeat osd_stat(store_statfs(0x4fa7d9000/0x0/0x4ffc00000, data 0xd8d46d/0xe95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:47.490178+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:48.490376+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:49.490519+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:50.490781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.306160927s of 10.427680016s, submitted: 37
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 183 ms_handle_reset con 0x55b6592db800 session 0x55b6599c7a40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162767 data_alloc: 218103808 data_used: 421888
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:51.490889+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592dbc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 183 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0xd8f806/0xe99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 184 ms_handle_reset con 0x55b6592dbc00 session 0x55b6599c7c20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:52.491024+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:53.491231+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:54.491439+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:55.491614+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 185 heartbeat osd_stat(store_statfs(0x4fa7cf000/0x0/0x4ffc00000, data 0xd92666/0xe9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167991 data_alloc: 218103808 data_used: 425984
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:56.491785+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:57.491908+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:58.492064+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:54:59.492225+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:00.492408+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:01.492550+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:02.492704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:03.492856+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:04.493029+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:05.493201+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:06.493339+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:07.493529+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:08.493714+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:09.493935+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:10.494135+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:11.494276+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:12.495681+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:13.495848+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:14.496011+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:15.496150+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:16.496321+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:17.496471+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:18.496597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:19.496767+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:20.497005+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:21.497206+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:22.497378+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:23.497552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:24.497728+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:25.497887+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:26.498054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:27.498236+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:28.498395+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 ms_handle_reset con 0x55b657afe000 session 0x55b6594b7c20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592dbc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:29.498545+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:30.498723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:31.498874+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:32.499042+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:33.499226+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:34.499415+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:35.499530+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:36.499705+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:37.499952+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:38.500193+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:39.500343+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:40.500633+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:41.500874+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:42.501101+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:43.501334+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:44.501562+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:45.501995+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:46.502234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:47.503769+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:48.504102+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:49.504722+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:50.505295+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:51.505530+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:52.506932+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:53.507252+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:54.507397+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 18423808 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:55.507618+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:56.507828+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:57.508001+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:58.508186+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:55:59.508364+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:00.508771+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:01.508978+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:02.509359+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:03.509564+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:04.509748+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:05.509914+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:06.510080+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:07.510247+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:08.510380+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:09.510544+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:10.510712+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:11.510909+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:12.511071+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:13.511218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:14.511523+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:15.511683+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:16.511859+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:17.512203+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:18.512362+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:19.512535+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:20.512719+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:21.512923+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:22.513142+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:23.513302+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:24.513487+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:25.513661+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:26.513811+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:27.513959+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:28.514181+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:29.514368+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:30.514566+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:31.514759+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:32.514939+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:33.515077+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:34.515257+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:35.515446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:36.515609+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:37.515820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:38.516006+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:39.516191+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:40.516380+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:41.516536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:42.516711+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:43.516903+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 18415616 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:44.517042+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:45.517218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:46.517363+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:47.517535+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:48.517680+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:49.517791+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:50.517975+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:51.518168+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:52.518347+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:53.518503+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:54.518658+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:55.518823+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:56.518978+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:57.519218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:58.519384+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:56:59.519543+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:00.519775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:01.519974+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:02.520137+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:03.520313+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:04.520487+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:05.520667+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:06.520835+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:07.521019+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:08.521225+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:09.521424+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:10.521644+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:11.521809+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:12.521991+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:13.522213+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:14.522399+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:15.522541+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:16.522704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:17.522865+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:18.523018+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:19.523165+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:20.523359+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:21.523505+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:22.523685+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:23.524112+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:24.524495+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:25.525901+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:26.526203+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:27.526392+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:28.526966+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:29.527183+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:30.527423+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:31.528161+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:32.528536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:33.528772+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:34.529131+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:35.529422+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:36.529684+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:37.529892+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:38.530070+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:39.530261+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:40.530491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:41.530642+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:42.530788+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:43.530942+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:44.531188+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:45.531378+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:46.531652+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:47.531789+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:48.531935+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:49.532216+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:50.532392+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:51.532605+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:52.532902+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:53.533094+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:54.533515+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:55.533672+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:56.533929+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:57.534083+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:58.534205+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:57:59.534327+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:00.534493+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:01.535103+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:02.535250+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:03.535406+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:04.535579+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:05.535770+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 18382848 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:06.535923+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:07.536158+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:08.536553+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:09.536825+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:10.537056+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:11.537216+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:12.537408+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:13.537569+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:14.537715+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:15.537907+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:16.538146+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:17.538322+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:18.538641+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:19.538845+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:20.539201+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:21.539396+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:22.539574+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:23.539771+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:24.539977+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:25.540189+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:26.540352+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:27.540533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:28.540798+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:29.540998+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:30.541261+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:31.542032+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:32.542279+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:33.542487+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:34.542658+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:35.542912+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:36.543076+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:37.543308+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:38.543496+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:39.543629+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:40.543814+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:41.543987+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:42.544159+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:43.544349+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:44.544552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:45.544777+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:46.544959+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:47.545165+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:48.545297+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:49.545818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:50.545998+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:51.546147+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:52.546326+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:53.546493+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:54.546635+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:55.546809+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:56.546961+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:57.547098+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:58.547279+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:58:59.547411+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:00.549965+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:01.550116+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:02.550255+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:03.550415+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:04.550581+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:05.550799+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:06.550969+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:07.551111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:08.551246+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:09.551396+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:10.551616+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:11.551814+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:12.552054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:13.552217+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:14.552363+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:15.552554+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:16.552704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:17.552873+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:18.553045+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 18366464 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:19.553204+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:20.553420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:21.553571+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:22.553743+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:23.553873+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:24.554025+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:25.554171+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7727 writes, 28K keys, 7727 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7727 writes, 1851 syncs, 4.17 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 973 writes, 2331 keys, 973 commit groups, 1.0 writes per commit group, ingest: 1.18 MB, 0.00 MB/s
                                           Interval WAL: 973 writes, 437 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:26.554360+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:27.554521+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:28.554708+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 podman[322474]: 2025-10-01 14:22:59.600529073 +0000 UTC m=+0.148117705 container health_status 583cde33fa111e3f81ff9a7adf51b7bee43cd123d54d60383b2d474b3b5066ad (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250923, org.label-schema.schema-version=1.0, tcib_build_tag=36bccb96575468ec919301205d8daa2c, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:2d1e733d24df6ca02636374147f801a0ec1509f8db2f9ad8c739b3f2341815fd', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:29.554940+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:30.555221+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:31.555444+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:32.555652+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:33.555851+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:34.556040+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:35.556191+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:36.556354+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:37.556517+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:38.556659+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:39.556833+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:40.557047+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:41.557234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 18350080 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:42.557447+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:43.557631+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:44.557801+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 18407424 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:45.558006+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:46.558215+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:47.558388+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:48.558596+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:49.558811+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 18399232 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:50.558995+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:51.559137+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:52.559306+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:53.559467+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:54.559645+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:55.559809+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:56.559923+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:57.560087+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:58.560275+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T13:59:59.560454+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:00.560665+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:01.560956+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 18391040 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:02.561152+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:03.561312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:04.561460+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:05.561621+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:06.561819+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:07.562008+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:08.562205+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:09.562365+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:10.562614+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:11.562820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:12.562936+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:13.563122+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:14.563352+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:15.563525+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:16.563691+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:17.563869+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:18.564024+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:19.564178+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:20.564356+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:21.564523+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 18374656 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:22.564717+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:23.564897+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:24.565054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:25.565208+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:26.565374+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:27.565568+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:28.565684+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:29.565851+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:30.566063+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:31.566259+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:32.566451+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1171285 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:33.566640+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:34.566782+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cc000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:35.566943+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 345.145324707s of 345.219573975s, submitted: 34
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 18358272 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:36.567119+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 18243584 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:37.567293+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:38.567821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:39.567987+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:40.568188+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:41.568360+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:42.568589+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:43.568823+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:44.568978+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:45.569145+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:46.569331+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:47.569521+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:48.569697+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:49.569871+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:50.570104+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:51.570222+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:52.570360+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:53.570557+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:54.570781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:55.570897+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:56.571042+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:57.571188+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:58.571390+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:00:59.571547+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:00.571714+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:01.571878+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:02.572037+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:03.572228+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:04.572442+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:05.572558+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:06.572718+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:07.572915+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:08.573069+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:09.573265+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:10.573491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:11.573651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:12.573808+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:13.573960+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:14.574151+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:15.574319+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:16.574509+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:17.574688+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 18186240 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:18.574838+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:19.574998+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:20.575210+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:21.575342+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:22.575506+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:23.575672+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:24.575861+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:25.575991+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:26.576137+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:27.576268+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:28.576431+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:29.576603+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:30.576803+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:31.576963+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:32.577103+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:33.577248+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:34.577412+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:35.577579+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 18178048 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:36.577838+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:37.578019+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:38.578172+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:39.578517+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:40.578975+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:41.579264+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:42.579502+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:43.580015+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:44.580263+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:45.581231+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:46.581508+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:47.581800+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:48.582651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:49.583366+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:50.583811+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:51.584032+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:52.584217+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:53.584793+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:54.584917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:55.585086+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:56.585428+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:57.585787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:58.585944+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:01:59.586160+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:00.586462+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:01.586648+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:02.586812+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:03.586997+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:04.587134+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:05.587277+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:06.587410+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:07.587576+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:08.587795+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:09.587925+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:10.588129+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:11.588276+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:12.588441+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:13.588680+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:14.588872+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:15.589098+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:16.589423+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:17.589642+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:18.589997+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:19.590194+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:20.590509+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:21.590831+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:22.591201+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:23.591418+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:24.591597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:25.591751+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:26.592020+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:27.592193+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:28.592528+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:29.592792+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:30.593031+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:31.593267+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:32.593477+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:33.593678+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:34.594201+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:35.594309+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:36.594467+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:37.594679+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:38.594887+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:39.595079+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:40.595794+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:41.595954+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:42.596100+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:43.596378+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:44.596576+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:45.598343+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:46.599498+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:47.600380+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:48.600884+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:49.601794+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:50.602834+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:51.602987+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:52.603304+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:53.603511+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:54.604025+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:55.604595+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:56.604758+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:57.605224+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:58.605467+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:02:59.605787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:00.606057+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:01.606272+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 18169856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:02.606438+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:03.606650+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:04.606894+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:05.607136+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:06.607375+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:07.607641+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:08.607889+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:09.608252+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:10.608488+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:11.608691+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:12.608920+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:13.609160+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:14.609347+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:15.609608+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:16.609804+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:17.609993+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:18.610167+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:19.610350+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:20.610592+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:21.610768+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:22.611010+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:23.611196+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:24.611359+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:25.611503+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:26.611641+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:27.611771+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:28.611926+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:29.612112+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:30.612296+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:31.612364+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:32.612536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:33.612689+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:34.612849+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:35.613048+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:36.613251+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:37.613393+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:38.613541+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:39.613671+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:40.613870+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:41.614102+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:42.614264+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:43.614429+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:44.614635+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:45.614796+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:46.614938+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:47.615083+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:48.615203+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:49.615404+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:50.615607+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:51.615770+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:52.615917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:53.616056+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:54.616200+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:55.616330+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:56.616455+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:57.616704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:58.616927+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:03:59.617191+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:00.617542+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:01.617898+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:02.618173+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:03.618380+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:04.618700+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:05.619089+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:06.619321+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:07.619583+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:08.619946+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:09.620218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:10.620474+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:11.620757+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:12.621034+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:13.621282+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:14.621556+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:15.621869+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:16.622119+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:17.622306+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:18.622581+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:19.622875+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:20.623111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:21.623356+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:22.623640+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:23.623893+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:24.624135+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 18161664 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:25.624552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:26.624846+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:27.625054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:28.625443+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:29.625617+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:30.625891+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:31.626135+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:32.626303+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:33.626568+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:34.626764+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:35.626974+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:36.627219+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:37.627409+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:38.627585+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:39.627774+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:40.627944+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:41.628116+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:42.628274+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:43.628473+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:44.628629+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:45.628782+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 18153472 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:46.628895+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:47.629079+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:48.629231+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:49.629420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:50.629629+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:51.629818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:52.630046+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:53.630137+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 18137088 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:54.630275+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:55.630407+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:56.630532+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:57.630667+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:58.630808+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:04:59.630961+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:00.631177+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:01.631282+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:02.631416+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:03.631497+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:04.631660+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:05.631818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:06.631971+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:07.632103+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:08.632259+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:09.632401+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:10.632553+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:11.632693+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:12.632858+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:13.633007+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:14.633174+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:15.633320+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:16.633478+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:17.633658+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:18.633820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:19.633996+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:20.634175+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:21.634311+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:22.634457+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:23.634610+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:24.634790+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:25.634975+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:26.635101+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:27.635297+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:28.635468+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:29.635690+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:30.635910+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:31.636087+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:32.636218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:33.636327+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:34.636486+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:35.636693+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:36.636897+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:37.637071+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:38.637208+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:39.637363+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:40.637574+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:41.637782+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:42.637952+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:43.638132+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:44.638289+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:45.638491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:46.638639+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:47.638794+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:48.639003+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:49.639178+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:50.639412+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:51.639618+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:52.639806+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:53.640042+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:54.640216+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:55.640385+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:56.640566+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 18128896 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:57.640714+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:58.641015+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:05:59.641398+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:00.641643+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:01.641857+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:02.642234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:03.642557+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:04.642977+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:05.643264+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:06.643554+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:07.643804+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:08.643984+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:09.644222+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:10.644437+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:11.644581+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:12.644785+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:13.644902+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:14.645040+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:15.645204+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:16.645348+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:17.645545+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:18.645686+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:19.645886+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:20.646174+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:21.646361+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:22.646553+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:23.646787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:24.647029+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:25.647230+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:26.647427+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:27.647623+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:28.647826+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:29.648020+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:30.650164+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:31.651978+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:32.653556+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:33.654906+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:34.656084+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:35.657126+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:36.658085+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:37.658896+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:38.659117+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:39.659317+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:40.659533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:41.659997+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:42.660290+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:43.660569+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:44.660810+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:45.661096+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:46.661328+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:47.661550+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:48.661779+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:49.662101+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:50.662418+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:51.662584+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:52.662873+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:53.663127+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:54.663332+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:55.663585+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:56.663781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:57.663997+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:58.664238+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:06:59.664456+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:00.664800+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:01.664966+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:02.665144+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:03.665279+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:04.665472+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:05.665608+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:06.665986+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:07.666139+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:08.666446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:09.666891+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:10.667259+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:11.667564+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:12.667748+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:13.667955+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:14.668166+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:15.668377+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:16.668651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:17.668821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:18.668964+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:19.669227+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:20.669497+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:21.669704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:22.669869+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:23.670032+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:24.670287+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:25.670419+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:26.670589+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:27.670799+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:28.671005+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:29.671199+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:30.671434+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:31.671579+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:32.671789+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:33.671966+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:34.672133+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:35.672284+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:36.672470+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:37.672613+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:38.672793+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:39.672971+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:40.673151+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:41.673350+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:42.673479+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:43.673633+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b657afe000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 18120704 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170405 data_alloc: 218103808 data_used: 434176
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 427.744934082s of 428.238677979s, submitted: 106
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:44.673911+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 187 ms_handle_reset con 0x55b657afe000 session 0x55b6599c6f00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 187 heartbeat osd_stat(store_statfs(0x4fa7cd000/0x0/0x4ffc00000, data 0xd940c9/0xea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:45.674047+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:46.674214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:47.674361+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:48.674510+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174579 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 187 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd95c9a/0xea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:49.674634+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:50.674836+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:51.674998+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 187 heartbeat osd_stat(store_statfs(0x4fa7c9000/0x0/0x4ffc00000, data 0xd95c9a/0xea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:52.675168+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:53.675340+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177553 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:54.675550+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:55.675776+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:56.675966+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:57.676125+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:58.676302+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177553 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:07:59.676481+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:00.676849+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:01.677020+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:02.677244+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:03.677450+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177553 data_alloc: 218103808 data_used: 442368
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:04.677621+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:05.677803+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:06.677986+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:07.678151+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:08.678342+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:09.678506+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:10.678808+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:11.679012+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:12.679157+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:13.679289+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:14.679468+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:15.679602+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:16.679749+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:17.679891+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:18.680013+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:19.680165+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:20.680390+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:21.680606+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:22.680812+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:23.680995+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:24.681199+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:25.681402+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:26.681557+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:27.681759+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:28.681951+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:29.682092+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:30.682247+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:31.682388+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:32.682525+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:33.682696+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:34.682810+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:35.682917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:36.683051+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:37.683201+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:38.683404+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:39.683635+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:40.683848+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:41.684083+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 18096128 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:42.684335+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:43.684518+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:44.685239+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:45.685463+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:46.685640+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:47.685864+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:48.686066+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:49.686221+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:50.686536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:51.686770+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:52.686982+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:53.687154+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:54.687432+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:55.687615+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:56.687790+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:57.687977+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:58.688205+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:08:59.688321+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:00.688527+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:01.688702+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:02.688950+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:03.689097+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:04.689247+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:05.689406+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:06.689571+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:07.689774+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:08.689934+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:09.690115+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:10.690338+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:11.690516+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:12.690670+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:13.690833+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:14.690984+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:15.691192+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:16.691387+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:17.691588+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:18.691830+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:19.692029+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:20.692269+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:21.692547+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:22.692787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:23.692955+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:24.693150+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 18087936 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:25.693335+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 7978 writes, 29K keys, 7978 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7978 writes, 1972 syncs, 4.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 251 writes, 437 keys, 251 commit groups, 1.0 writes per commit group, ingest: 0.18 MB, 0.00 MB/s
                                           Interval WAL: 251 writes, 121 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:26.693506+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:27.693723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:28.693945+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:29.694126+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:30.694412+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 18079744 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:31.694617+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: mgrc ms_handle_reset ms_handle_reset con 0x55b659413400
Oct 01 14:22:59 compute-0 ceph-osd[88455]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2102413293
Oct 01 14:22:59 compute-0 ceph-osd[88455]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2102413293,v1:192.168.122.100:6801/2102413293]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: get_auth_request con 0x55b6592a9800 auth_method 0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: mgrc handle_mgr_configure stats_period=5
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:32.694838+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:33.695021+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:34.695209+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:35.695390+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:36.695561+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 ms_handle_reset con 0x55b656c8bc00 session 0x55b6599c63c0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573afc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:37.695753+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:38.696236+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:39.696357+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:40.696511+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:41.696709+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:42.696935+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:43.697059+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b656c8bc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:44.697222+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177713 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 heartbeat osd_stat(store_statfs(0x4fa7c6000/0x0/0x4ffc00000, data 0xd976fd/0xea7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 17940480 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 120.560012817s of 121.113166809s, submitted: 24
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:45.697327+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 189 ms_handle_reset con 0x55b656c8bc00 session 0x55b659547c20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 17932288 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:46.697513+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 17932288 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:47.697669+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 17948672 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:48.697833+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 190 ms_handle_reset con 0x55b6573af000 session 0x55b659a1cf00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:49.697998+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099507 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 190 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x12ae7c/0x23c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:50.698260+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:51.698469+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:52.698632+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:53.698820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 17989632 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:54.699055+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099507 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 190 heartbeat osd_stat(store_statfs(0x4fb431000/0x0/0x4ffc00000, data 0x12ae7c/0x23c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:55.699246+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:56.699441+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:57.699610+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.431668282s of 12.534594536s, submitted: 40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:58.699796+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:09:59.699956+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105455 data_alloc: 218103808 data_used: 446464
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:00.700194+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 ms_handle_reset con 0x55b6592db000 session 0x55b659a1d4a0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x12e46f/0x243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb42b000/0x0/0x4ffc00000, data 0x12e46f/0x243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:01.700431+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 17981440 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:02.700643+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:03.700849+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:04.701053+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112369 data_alloc: 218103808 data_used: 454656
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:05.701218+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:06.701414+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:07.701587+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:08.701775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:09.701896+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:10.702105+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:11.702306+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:12.702491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:13.702669+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:14.702889+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:15.703080+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:16.703270+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:17.703439+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:18.703609+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:19.703816+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:20.704013+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:21.704142+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:22.704315+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:23.704495+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:24.704687+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:25.704873+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:26.704992+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:27.705089+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:28.705278+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 ms_handle_reset con 0x55b6592dbc00 session 0x55b6599c6960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b656c8bc00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:29.705482+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:30.705723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:31.705912+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:32.706073+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb426000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:33.706204+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:34.706420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 17973248 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113009 data_alloc: 218103808 data_used: 471040
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.109706879s of 37.140201569s, submitted: 18
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:35.706582+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 17866752 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:36.706779+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 17842176 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,1])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:37.706920+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:38.707088+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:39.707270+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:40.707446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:41.707616+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:42.707791+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:43.708154+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:44.708318+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:45.708491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:46.708642+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:47.708801+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:48.708949+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:49.709071+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:50.709275+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:51.709472+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:52.709612+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:53.709802+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:54.709981+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:55.710124+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:56.710314+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:57.710435+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:58.710604+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:10:59.710753+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:00.710936+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:01.711080+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:02.711274+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:03.711515+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:04.711710+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:05.711945+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:06.712135+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:07.712316+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:08.712491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:09.712651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:10.712875+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:11.713063+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:12.713246+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:13.713420+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:14.713613+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:15.713807+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 17645568 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:16.713973+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 17637376 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:17.714214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 17637376 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:18.714403+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 17637376 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:19.714606+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 17637376 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:20.714771+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 17637376 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:21.714924+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:22.715057+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:23.715190+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:24.715353+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:25.715580+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:26.715826+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:27.715994+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:28.716152+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:29.716327+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:30.716568+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:31.716863+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:32.717052+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:33.717182+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:34.717375+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:35.717601+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:36.717821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:37.717982+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:38.718171+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:39.718395+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113857 data_alloc: 218103808 data_used: 524288
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:40.718724+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:41.718969+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:42.719215+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 17629184 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:43.719383+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 heartbeat osd_stat(store_statfs(0x4fb427000/0x0/0x4ffc00000, data 0x13000f/0x247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 90333184 unmapped: 9240576 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 68.327560425s of 68.754341125s, submitted: 132
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:44.719533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 ms_handle_reset con 0x55b6573af000 session 0x55b659a1da40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 25903104 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b657afe000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260620 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:45.719687+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 ms_handle_reset con 0x55b657afe000 session 0x55b659a1de00
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:46.719903+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:47.720097+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:48.720284+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:49.720489+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350130 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:50.720864+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 25747456 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:51.721058+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:52.721241+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:53.721463+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:54.721674+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350130 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:55.721869+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:56.722069+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:57.722237+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:58.722467+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:11:59.722622+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350130 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:00.723101+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:01.723431+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:02.723656+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:03.723803+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:04.724036+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350130 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:05.724221+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:06.724430+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:07.724651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:08.724861+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:09.725091+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x221372c/0x232e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350130 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:10.725365+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.975107193s of 27.182064056s, submitted: 20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:11.725487+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 196 ms_handle_reset con 0x55b6592db000 session 0x55b6599c7a40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:12.725698+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:13.725860+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 196 heartbeat osd_stat(store_statfs(0x4f933e000/0x0/0x4ffc00000, data 0x22151c9/0x232f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:14.726033+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350674 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:15.726185+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:16.726366+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:17.726543+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:18.726708+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:19.726903+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:20.727111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:21.727241+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:22.727402+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:23.727567+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:24.727752+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:25.728014+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:26.728168+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:27.728344+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:28.728623+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:29.728777+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:30.728988+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:31.729177+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:32.729427+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:33.729635+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:34.729907+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:35.730111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:36.730291+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:37.730471+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:38.730872+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:39.731119+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:40.731407+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:41.731573+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:42.731825+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:43.731997+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:44.732174+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:45.732333+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:46.732486+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:47.732673+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:48.732832+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:49.732971+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:50.733222+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:51.733361+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:52.733563+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:53.733703+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:54.733917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:55.734094+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:56.734257+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:57.736140+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:58.738011+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:12:59.738846+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:00.739080+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:01.739346+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:02.739658+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:03.739971+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:04.740160+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:05.740717+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:06.741029+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:07.741266+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:08.741389+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:09.741558+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:10.741794+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:11.741896+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:12.742101+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:13.742255+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:14.742398+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:15.742683+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:16.742877+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:17.743214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:18.743414+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:19.743578+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:20.743818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:21.744021+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:22.744264+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:23.744446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:24.744651+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:25.744820+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:26.744992+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:27.745127+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:28.745350+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:29.745492+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:30.745675+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:31.745791+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:32.745931+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:33.746089+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:34.746235+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:35.746375+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:36.746481+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:37.746612+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:38.746709+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:39.746927+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:40.747121+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:41.747306+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:42.747484+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:43.747622+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:44.747787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:45.747922+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:46.748054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:47.748180+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:48.748323+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:49.748457+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:50.748693+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:51.748809+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:52.748938+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:53.749125+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:54.749285+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:55.749435+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:56.749622+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:57.749768+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:58.749915+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:13:59.750092+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:00.750312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:01.750411+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:02.750533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:03.750710+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:04.750923+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:05.751129+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:06.751306+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:07.751432+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:08.751564+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:09.751801+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:10.752009+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:11.752224+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:12.752339+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:13.752533+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:14.752713+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:15.752906+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:16.753032+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:17.753174+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:18.753312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:19.753497+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:20.753677+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:21.753910+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:22.754068+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:23.754206+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:24.754339+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:25.754483+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:26.754701+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:27.754855+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:28.755048+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:29.755230+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:30.755439+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:31.755586+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:32.755751+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:33.756111+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:34.756302+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:35.756818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:36.757241+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:37.757501+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:38.758498+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:39.758817+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:40.759189+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:41.759806+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:42.760053+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:43.760231+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:44.760497+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:45.760782+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:46.761017+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:47.761250+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:48.761443+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:49.761665+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:50.761939+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:51.762145+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:52.762387+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:53.762551+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:54.762683+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:55.762866+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:56.763057+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:57.763260+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:58.763412+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:14:59.763645+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:00.763852+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:01.764016+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 01 14:22:59 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228637702' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:02.764199+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:03.764365+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:04.764522+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:05.764677+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:06.764846+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:07.764995+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:08.765155+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:09.765367+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:10.765851+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:11.766054+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:12.766234+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:13.766309+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:14.766461+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:15.766597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:16.766775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:17.766917+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:18.767049+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:19.767148+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:20.767344+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:21.767478+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:22.767625+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:23.767808+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:24.767995+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:25.768116+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:26.768241+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:27.768344+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:28.768457+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:29.768680+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:30.768909+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:31.769064+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:32.769245+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:33.769427+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:34.769639+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:35.769804+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:36.769961+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:37.770110+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:38.770253+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:39.770411+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:40.770668+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:41.770866+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:42.771041+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:43.771174+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:44.771377+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:45.771552+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:46.771781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:47.771958+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:48.772138+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:49.772291+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:50.772506+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:51.772675+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:52.772864+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:53.773016+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:54.773184+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:55.773335+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:56.773477+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:57.773638+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:58.773846+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:15:59.774024+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:00.774299+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:01.774430+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:02.774563+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:03.774831+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:04.775048+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:05.775221+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:06.775453+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:07.775687+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:08.775850+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:09.776002+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:10.776182+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:11.776352+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:12.776539+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:13.776776+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:14.776967+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:15.777149+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:16.777314+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:17.777512+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:18.777672+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:19.777859+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:20.778070+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:21.778228+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:22.778391+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:23.778628+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:24.778850+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:25.779048+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:26.779243+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:27.779389+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:28.779585+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:29.779810+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:30.780004+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:31.780155+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:32.780314+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:33.780463+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:34.780628+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:35.780803+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:36.780939+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:37.781082+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:38.781216+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:39.781364+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:40.781564+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:41.781721+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:42.781920+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:43.782085+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:44.782254+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:45.782476+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:46.782642+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:47.782816+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:48.782982+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:49.783144+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:50.783389+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:51.783600+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:52.783780+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:53.783891+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:54.784052+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:55.784213+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:56.784367+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:57.784517+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:58.784670+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:16:59.784818+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:00.785097+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:01.785268+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:02.785448+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:03.785614+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:04.785861+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:05.786021+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:06.786210+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:07.786410+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:08.786576+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:09.786709+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:10.786885+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:11.787048+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:12.787233+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:13.787401+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:14.787565+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:15.787785+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:16.787957+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:17.788123+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:18.788278+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:19.788480+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:20.788668+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:21.788856+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:22.789013+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:23.789171+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:24.789348+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:25.789564+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:26.789776+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:27.789975+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:28.790138+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:29.790310+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:30.790493+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:31.790661+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:32.790861+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:33.791015+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:34.791194+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:35.791310+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:36.791452+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:37.791664+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:38.791797+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:39.791944+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:40.792162+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:41.792265+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:42.792389+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:43.792539+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:44.792811+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:45.792955+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:46.793083+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:47.793215+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:48.793376+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:49.793498+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:50.793654+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:51.793796+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:52.793954+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:53.794132+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:54.794316+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:55.794446+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:56.794604+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:57.794772+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:58.794905+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:17:59.795088+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:00.795290+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:01.795463+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:02.795605+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:03.795832+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:04.796015+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:05.796254+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:06.796402+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:07.796559+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:08.796727+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:09.796950+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:10.797196+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:11.797376+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:12.797543+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:13.797704+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:14.797880+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:15.798060+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:16.798181+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:17.798360+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:18.798835+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:19.799134+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:20.800658+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:21.801777+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:22.802257+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:23.802579+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:24.803499+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:25.803872+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:26.804456+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:27.804881+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:28.805210+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:29.805536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:30.805807+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:31.806007+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:32.806286+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:33.806566+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:34.806825+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:35.807031+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:36.807221+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:37.807416+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:38.807588+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:39.807807+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:40.808013+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:41.808287+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:42.808524+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:43.808726+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:44.808994+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:45.809196+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:46.809411+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:47.809620+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:48.809797+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 25739264 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:49.809952+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:50.810193+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:51.810359+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:52.810518+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:53.810676+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:54.811606+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:55.811788+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:56.811925+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:57.812126+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:58.812256+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:18:59.812848+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:00.813047+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:01.813246+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:02.813375+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:03.813512+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:04.813631+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:05.813824+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:06.814003+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:07.814214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:08.814351+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:09.814522+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:10.814775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:11.815025+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:12.815220+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:13.815416+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:14.815597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:15.815832+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:16.816041+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:17.816214+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:18.816358+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:19.816548+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:20.816774+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:21.816912+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:22.817266+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:23.818780+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:24.819565+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:25.819710+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 8418 writes, 30K keys, 8418 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8418 writes, 2178 syncs, 3.87 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 440 writes, 1113 keys, 440 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s
                                           Interval WAL: 440 writes, 206 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:26.819952+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:27.820123+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:28.820506+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:29.820677+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:30.821340+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:31.821555+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:32.821829+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:33.821985+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:34.822476+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:35.822646+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:36.823082+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:37.823244+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:38.823626+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:39.823801+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:40.824178+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:41.824370+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:42.824555+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:43.824723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:44.824932+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:45.825099+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:46.825879+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:47.826017+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:48.826247+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:49.826397+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:50.826638+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:51.826764+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:52.826904+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:53.827057+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:54.827249+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:55.827531+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:56.827863+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 25731072 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6592db800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1353648 data_alloc: 218103808 data_used: 532480
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 465.681579590s of 466.144317627s, submitted: 24
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 198 ms_handle_reset con 0x55b6592db800 session 0x55b6599c6780
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:57.828126+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 198 heartbeat osd_stat(store_statfs(0x4f933b000/0x0/0x4ffc00000, data 0x2216c2c/0x2332000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 198 heartbeat osd_stat(store_statfs(0x4fa7a8000/0x0/0x4ffc00000, data 0xda87fd/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:58.828280+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 25714688 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b658a63800
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:19:59.829245+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 25706496 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:00.829502+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 25706496 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fa7a9000/0x0/0x4ffc00000, data 0xda87fd/0xec5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:01.829639+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222252 data_alloc: 218103808 data_used: 544768
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:02.829793+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 25698304 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:03.830046+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 25665536 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fb416000/0x0/0x4ffc00000, data 0x13a3ce/0x258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:04.830203+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 ms_handle_reset con 0x55b658a63800 session 0x55b658ae6000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 25665536 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fb416000/0x0/0x4ffc00000, data 0x13a3ce/0x258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 heartbeat osd_stat(store_statfs(0x4fb417000/0x0/0x4ffc00000, data 0x13a3ab/0x257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:05.830413+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 25665536 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:06.830545+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 25665536 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137868 data_alloc: 218103808 data_used: 540672
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:07.830723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 25665536 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:08.830895+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b6573af000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 90693632 unmapped: 17276928 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.620023727s of 11.763409615s, submitted: 44
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:09.831079+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 ms_handle_reset con 0x55b6573af000 session 0x55b658ae6b40
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:10.831312+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:11.831483+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200536 data_alloc: 218103808 data_used: 548864
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:12.831673+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:13.831872+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:14.832050+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:15.832217+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:16.832379+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200696 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:17.832572+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:18.832824+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:19.833091+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:20.833438+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:21.833641+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200696 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:22.833848+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:23.834038+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:24.834240+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:25.834409+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:26.834642+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200696 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:27.834889+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:28.835074+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:29.835213+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:30.835482+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:31.835646+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200696 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:32.835797+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:33.835909+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac0f000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:34.836089+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:35.836209+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 25567232 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.380243301s of 27.433187485s, submitted: 15
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:36.836343+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 25526272 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199144 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:37.836514+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 25477120 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:38.836632+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 25444352 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:39.836825+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:40.837002+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:41.837172+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199144 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:42.837320+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:43.837470+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:44.838570+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:45.838812+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:46.839011+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199144 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:47.839166+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:48.839287+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:49.839423+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:50.839606+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:51.839823+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199144 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:52.839990+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:53.840155+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:54.840358+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:55.840520+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:56.840663+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199144 data_alloc: 218103808 data_used: 552960
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:57.840797+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:58.840994+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 heartbeat osd_stat(store_statfs(0x4fac11000/0x0/0x4ffc00000, data 0x93d98b/0xa5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:20:59.841175+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: handle_auth_request added challenge on 0x55b657afe000
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 25378816 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _renew_subs
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.663669586s of 24.019950867s, submitted: 106
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 202 ms_handle_reset con 0x55b657afe000 session 0x55b658ae6d20
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:00.841857+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:01.842076+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149742 data_alloc: 218103808 data_used: 561152
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:02.843059+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:03.843598+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:04.843862+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 202 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x13f55c/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:05.844286+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:06.844671+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149742 data_alloc: 218103808 data_used: 561152
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:07.845005+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:08.845246+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:09.845438+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:10.845625+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 202 heartbeat osd_stat(store_statfs(0x4fb40d000/0x0/0x4ffc00000, data 0x13f55c/0x260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.349782944s of 10.416832924s, submitted: 14
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:11.845787+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152716 data_alloc: 218103808 data_used: 561152
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:12.845904+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:13.846070+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:14.846184+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:15.846303+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:16.846439+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 25346048 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:17.846578+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:18.846866+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:19.847163+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:20.847426+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:21.847597+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:22.847795+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:23.848001+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:24.848144+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:25.848356+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:26.848582+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:27.848775+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:28.848971+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:29.849165+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:30.849464+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:31.849650+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:32.850839+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:33.851061+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:34.851419+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:35.851685+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:36.852067+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:37.852269+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:38.852607+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:39.852884+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:40.853155+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:41.853602+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:42.853865+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:43.854176+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:44.854393+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:45.854590+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:46.854821+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:47.855159+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:48.855491+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:49.855719+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:50.856038+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:51.856223+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:52.856371+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:53.856580+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:54.856759+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:55.856959+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:56.857102+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:57.857220+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:58.857349+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:21:59.857536+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:00.857723+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:01.857950+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:02.858119+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:03.858261+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:04.858426+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:05.858602+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:06.858722+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:07.858837+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:08.858988+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:09.859117+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:10.859271+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:11.859394+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:12.859532+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:13.859643+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:14.859755+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:15.859895+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:16.860266+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:17.860380+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:18.860501+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:19.860620+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:20.860781+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:21.860894+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:22.861011+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:23.861243+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:24.861394+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: osd.0 203 heartbeat osd_stat(store_statfs(0x4fb40a000/0x0/0x4ffc00000, data 0x140fbf/0x263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 25337856 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:25.861544+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 25092096 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:26.861703+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'config diff' '{prefix=config diff}'
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'config show' '{prefix=config show}'
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'counter dump' '{prefix=counter dump}'
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'counter schema' '{prefix=counter schema}'
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 24920064 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:27.861778+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 01 14:22:59 compute-0 ceph-osd[88455]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 01 14:22:59 compute-0 ceph-osd[88455]: bluestore.MempoolThread(0x55b6551c1b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152876 data_alloc: 218103808 data_used: 565248
Oct 01 14:22:59 compute-0 ceph-osd[88455]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 24559616 heap: 107970560 old mem: 2845415832 new mem: 2845415832
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: tick
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_tickets
Oct 01 14:22:59 compute-0 ceph-osd[88455]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-01T14:22:28.861889+0000)
Oct 01 14:22:59 compute-0 ceph-osd[88455]: do_command 'log dump' '{prefix=log dump}'
Oct 01 14:22:59 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:00 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15275 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:00 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/1228637702' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 01 14:23:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 01 14:23:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850278593' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 14:23:00 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 01 14:23:00 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384584577' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 01 14:23:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926157466' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: pgmap v2419: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:01 compute-0 ceph-mon[74802]: from='client.15275 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/850278593' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2384584577' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2926157466' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 01 14:23:01 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660012914' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 14:23:01 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:02 compute-0 systemd[1]: Starting Hostname Service...
Oct 01 14:23:02 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15285 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:02 compute-0 systemd[1]: Started Hostname Service.
Oct 01 14:23:02 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2660012914' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 01 14:23:02 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 01 14:23:02 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915629205' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 01 14:23:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244702956' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 01 14:23:03 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15291 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mon[74802]: pgmap v2420: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:03 compute-0 ceph-mon[74802]: from='client.15285 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/915629205' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/4244702956' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 01 14:23:03 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:03 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 01 14:23:03 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848129416' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 14:23:04 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15295 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:04 compute-0 ceph-mon[74802]: from='client.15291 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:04 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3848129416' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 01 14:23:04 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15297 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:04 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 01 14:23:04 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496366015' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 01 14:23:05 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244728543' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: pgmap v2421: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:05 compute-0 ceph-mon[74802]: from='client.15295 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: from='client.15297 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/496366015' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/2244728543' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.589525) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585589694, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 622, "num_deletes": 251, "total_data_size": 524332, "memory_usage": 536888, "flush_reason": "Manual Compaction"}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585594144, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 518690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48461, "largest_seqno": 49082, "table_properties": {"data_size": 515249, "index_size": 1221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9331, "raw_average_key_size": 20, "raw_value_size": 507893, "raw_average_value_size": 1121, "num_data_blocks": 53, "num_entries": 453, "num_filter_entries": 453, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759328559, "oldest_key_time": 1759328559, "file_creation_time": 1759328585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 4688 microseconds, and 1810 cpu microseconds.
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.594218) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 518690 bytes OK
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.594231) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.595521) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.595533) EVENT_LOG_v1 {"time_micros": 1759328585595529, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.595547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 520711, prev total WAL file size 520711, number of live WAL files 2.
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.596015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(506KB)], [113(10MB)]
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585596038, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11479619, "oldest_snapshot_seqno": -1}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6448 keys, 9722767 bytes, temperature: kUnknown
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585653719, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9722767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9679196, "index_size": 26373, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 168447, "raw_average_key_size": 26, "raw_value_size": 9561852, "raw_average_value_size": 1482, "num_data_blocks": 1044, "num_entries": 6448, "num_filter_entries": 6448, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759324077, "oldest_key_time": 0, "file_creation_time": 1759328585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e1fb0bd7-c6bf-40f2-8bfb-ffd039289f42", "db_session_id": "NJZTWL88H5HSB4Q4NEC9", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.654030) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9722767 bytes
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.655828) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.7 rd, 168.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(40.9) write-amplify(18.7) OK, records in: 6961, records dropped: 513 output_compression: NoCompression
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.655856) EVENT_LOG_v1 {"time_micros": 1759328585655844, "job": 68, "event": "compaction_finished", "compaction_time_micros": 57788, "compaction_time_cpu_micros": 21361, "output_level": 6, "num_output_files": 1, "total_output_size": 9722767, "num_input_records": 6961, "num_output_records": 6448, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585656145, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759328585659557, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.595912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.659699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.659710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.659712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.659714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mon[74802]: rocksdb: (Original Log Time 2025/10/01-14:23:05.659717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 01 14:23:05 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15303 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:05 compute-0 ceph-mgr[75103]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 456 KiB data, 220 MiB used, 60 GiB / 60 GiB avail
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: log_channel(audit) log [DBG] : from='client.15305 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct 01 14:23:06 compute-0 ceph-mgr[75103]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct 01 14:23:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 01 14:23:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805047939' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 14:23:06 compute-0 ceph-mon[74802]: from='client.15303 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 01 14:23:06 compute-0 ceph-mon[74802]: from='client.? 192.168.122.100:0/3805047939' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 01 14:23:06 compute-0 ceph-mon[74802]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 01 14:23:06 compute-0 ceph-mon[74802]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113485275' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
